Category: Uncategorized

  • Netdata Having Trouble Installing On RHEL 10

    Here’s the skinny: Running the Netdata install/register script on RHEL 10 didn’t completely work on the two VM servers I recently built for a future post diving into Netdata. I spun up a Debian 13 and an Ubuntu 24.04 server as well, and both installed Netdata and its plugins without any issues, so it seems to be a problem with RHEL 10, or RHEL in general. I haven’t tried this on RHEL 9 or a RHEL-comparable like Rocky or AlmaLinux, so testing those might be an additional edit in the future. All the VMs ran in VirtualBox with a Bridged Network Adapter for internet access, and all were registered with a Red Hat Developer account. Even after booting up a third RHEL 10 server, it still gave the same error, so I was able to get a screenshot.

    Here’s the script error after spinning up a 3rd RHEL server

    Update Sept 30th 2025:

    I did a test run with Rocky 10 to further narrow down this issue, and it worked with no problem. The only real difference between RHEL and Rocky is the subscription needed to register the device on RHEL; if it’s not registered, it won’t have access to the app repositories. That said, the problem can now be narrowed down to something in RHEL that prevents the epel-release package from being pulled properly during the Netdata installation via their script. Something in the script is causing it to crash while loading the RHEL repositories.


    Update Oct 3rd 2025:

    After a little more digging, and a fruitful post on Reddit, it seems you still have to manually install the EPEL repo using the conventional means. One commenter on Reddit perfectly described it as a “chicken or the egg” scenario: the epel-release package activates the EPEL repo, but it can only be installed after you enable the EPEL repo, because it’s only in the EPEL repo… Pretty brutal. Rocky 10 circumvents this by including its Rocky Linux 10 - Extras repo by default, which lets you activate EPEL with just epel-release (hence the screenshot above showing it working with no issues). RHEL should ship something like this by default as well, since it would get rid of the extra commands needed to get EPEL.
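    For reference, the conventional means on RHEL 10 look something like this (a sketch based on the EPEL project’s documented steps; the CodeReady Builder repo name varies by version and architecture, so double-check it against your subscription):

    ```shell
    # Enable CodeReady Builder, which many EPEL packages depend on
    subscription-manager repos --enable codeready-builder-for-rhel-10-$(arch)-rpms

    # Install epel-release directly from the Fedora mirrors, since the
    # package isn't reachable through RHEL's default repos
    dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-10.noarch.rpm
    ```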

    Ultimately, though, this is a Netdata issue with their script: it should have a way to pull down the EPEL repo activation commands when it recognizes a RHEL operating system. This could be done when the script checks all active repos; depending on which RHEL product and version it’s running on, it could pull the correct commands down before moving on with the rest of the process. That would alleviate any hiccups, and we could’ve avoided this whole blog post in the first place!
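    A minimal sketch of that idea, using the ID and VERSION_ID fields from /etc/os-release (the function name and the distro-to-command mapping are my own illustration, not Netdata’s actual code):

    ```shell
    # Map an os-release ID and major version to the command that enables EPEL.
    # Purely illustrative; a real installer would handle CentOS Stream, etc. too.
    epel_enable_cmd() {
        id="$1"; ver="$2"
        case "$id" in
            rhel)
                # RHEL ships no Extras repo carrying epel-release, so pull the rpm directly
                echo "dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-${ver}.noarch.rpm"
                ;;
            rocky|almalinux)
                # These distros carry epel-release in their default/Extras repos
                echo "dnf install epel-release"
                ;;
            *)
                echo ""
                ;;
        esac
    }

    # Example usage on a live system:
    # . /etc/os-release
    # epel_enable_cmd "$ID" "${VERSION_ID%%.*}"
    ```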


    What Exactly Happened?

    When using the Linux script to add a node, it failed to install Netdata from their rpm package on RHEL and notified me that it was falling back to a different install method: grabbing the files directly from the Netdata GitHub. This method still installs Netdata and connects your node, but the script installing this way didn’t include everything that the former install method provides.
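    If you want to confirm which path the script took on your machine, the two methods leave different footprints (the paths here reflect my understanding of Netdata’s defaults, so treat them as a hint rather than gospel):

    ```shell
    # Native package install: the rpm database knows about it
    rpm -q netdata

    # Static/GitHub fallback install: everything lives under /opt/netdata
    ls /opt/netdata
    ```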

    I noticed this initially after connecting my two RHEL 10 VM instances to my dashboard. Under the logging section for both RHEL servers, logging wasn’t enabled at all, so I couldn’t see any of that data being reported and categorized in the Netdata console. There was no data display whatsoever, just text noting that logging is not set up and a hyperlink to learn more about Netdata’s logging features. This is a default setting that shouldn’t require any setup, and seeing system logs in a dashboard is obviously one of the attractive features of a monitoring service like Netdata, so I had to find a way to fix this for my own sanity.

    The Fix

    Skipping the details so as not to bore you with my hour or so of testing a couple of different methods, the fix I ultimately found is to pull the netdata.run file straight from the Netdata GitHub and manually execute it outside of the original setup script. This eliminates the need for the EPEL repository, which their kickoff script relies on, since epel-release was part of the original issue. I already had these two RHEL 10 server instances running and attached to Netdata, but running netdata.run didn’t cause any disconnection during or after execution: it just added the missing packages, restarted the services, and, what do you know, the services that were missing now showed up. The script can detect already-installed files and connected nodes, so luckily it just filled in the gaps and didn’t do a complete overwrite.

    Before I pulled the stable release of netdata.run from GitHub, I made sure to cd into the /tmp directory to keep the file out of the way, since we only need it this once. Also, since we’re manually downloading and running this file, you have to make it executable with the chmod command after you download it. I ran these as root, but remember to prefix them with sudo if you’re opting to run them from a user account with sudoer privileges instead:

    cd /tmp
    wget https://github.com/netdata/netdata/releases/latest/download/netdata-latest.gz.run -O netdata.run
    chmod +x netdata.run
    sh netdata.run
    
    This is the tail of netdata.run after execution

    Additional Thoughts

    The only thing I wish I’d done differently is use something like tmux in the terminal so I could scroll back and see what it said when it failed, instead of the red blurs I saw fly by during the install. It’s common practice to use tmux when accessing a machine via ssh anyway, but since this was a local install for testing purposes I skipped it. Another option would’ve been sending the output to a log file, a more permanent record I could go back and review.
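    Capturing that output is a one-line change. Here’s the pattern with a stand-in function in place of the real installer (fake_installer is just an illustration), so the redirection is clear:

    ```shell
    # Stand-in for "sh netdata.run" so the example is self-contained
    fake_installer() {
        echo "installing packages..."
        echo "error: something went red" >&2
    }

    # 2>&1 folds stderr into stdout; tee prints it live AND writes it to a file
    fake_installer 2>&1 | tee install.log
    ```

    The same shape works for the real thing: sh netdata.run 2>&1 | tee netdata-install.log.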

    Here’s the Logs section when it’s operational
  • Is Nala Still Good After Apt ver.3 Update?

    Debian 13 Trixie on Apt ver.3

    With Debian 13 Trixie becoming the new stable branch, apt has been upgraded to version 3. A lot of much-needed and welcome updates to the UI have made their way into this new version, making it the cleanest apt has ever looked. Upgrades from the previous version include:

    • Better terminal output, with columned and colorized display that is more pleasing to the user.
    • Warnings moved toward the end, so they are no longer buried in the output.
    • A new dependency resolution engine, “solver3”, which uses better logic about which packages to install, keep, or remove.

    Nala

    Available in the Debian repository, Nala is a frontend for apt that fixes some of apt’s longstanding problems. Nala offers a cleaner UI, used colorized output before apt even integrated that feature, and tends overall to be the better frontend option for desktop users. Even though Nala was available before this new upgrade to apt, it still offers improvements over the current version, like:

    • Better mirror selection with nala fetch, which will ping all Debian repository mirrors and give you a list of mirrors from fastest to slowest, giving you a definitive way to select the fastest choices for your machine.
    • Supports parallel downloads like Fedora’s DNF or Arch’s Pacman package managers, so download speeds are much faster than just using apt which utilizes sequential downloads.
    • A transaction log, which lets you view recent changes made to packages and even roll back updates if needed.
    • A single command, nala upgrade, that refreshes the package list and upgrades in one step, instead of the traditional apt update and apt upgrade pair, which is a great quality-of-life feature.

    Is Nala a No-Brainer then?

    Well, just like anything, it’s not always that cut and dry. Nala isn’t without its own problems, and that’s something you have to consider when adding an additional layer of software: doing so introduces another potential point of failure. This comes into play when considering Nala for other use cases like servers; does this offer enough upside to be warranted?

    Most servers are best advised to get updates from a central machine that pulls its repository updates from the internet, so mirror selection for speed isn’t as crucial, and the visual upgrades matter less when observing servers through a cloud console or a remote terminal over ssh.

    Now for desktop use, the upsides far outweigh the potential downsides, because in this environment you benefit more from the visual changes Nala provides, and desktop users tend to be more hands-on, running manual updates instead of unattended ones, so you can see in real time if an error occurs.


    This topic has been played out online before, so it’s nothing new; see this recent thread on Reddit discussing Nala. The new update to apt seems to be bringing the topic up again, but opinion appears to be split roughly evenly between using Nala and not. Some users even report Nala breaking or causing issues, which reaffirms the point about extra layers of software from a couple of paragraphs back.

    At the end of the day, weigh your options and use what’s best for you, because in the true nature of open source, the choice is ultimately yours! If you’ve used Nala, leave a comment and let me know your experience.

  • Fresh Install Debian 13 Trixie manually with Btrfs, Timeshift, and Grub-Btrfs

    I Made My First Guide on GitHub!

    This goes back a couple of weeks now, but I’ve been updating it with incremental improvements to ensure quality and accuracy since it was completed. It was born from accidentally nuking a system upgrade from Debian 12 to 13, then taking the opportunity to do a fresh install without an ext4 filesystem and use something that can recover a botched system more easily. Lemons to lemonade!

    The main goal of the guide is to give a detailed walkthrough of setting up the new stable branch Debian 13 with:

    • a manually sub-volumed btrfs filesystem,
    • the application Timeshift to facilitate automatically scheduled snapshot creation, and
    • Grub-Btrfs so the snapshots are readily available in your boot menu for easy rollback to previous system states.
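    For a rough idea of what “manually sub-volumed” means, this is the general shape of such a layout (the @/@home names follow a common convention; the device and mount options here are placeholders, not the guide’s exact values):

    ```shell
    # Mount the top-level btrfs volume and carve out subvolumes
    mount /dev/sdX2 /mnt
    btrfs subvolume create /mnt/@        # will become /
    btrfs subvolume create /mnt/@home    # will become /home
    umount /mnt

    # Remount each subvolume at its place in the final tree
    mount -o subvol=@,compress=zstd /dev/sdX2 /mnt
    mkdir -p /mnt/home
    mount -o subvol=@home,compress=zstd /dev/sdX2 /mnt/home
    ```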

    There are many more options for facilitating btrfs snapshot creation, even automatic snapshots taken before every app install/update/remove using scripts for Timeshift or Snapper, so I hope to dig into those further in the future. In my mind, the ultimate setup to cover all bases would be Snapper handling automatic snapshot creation for every app install/update/remove via the aforementioned scripts, in conjunction with Timeshift using rsync to schedule complete backups to an additional hard drive.

    The GitHub repository is here: Debian-13-BTRFS-Install-Guide

    If anyone wants to contribute, you’re more than welcome to submit a pull request!

  • Obligatory JosephTSuarez.com Going Live Post

    My First Post: Going Live!

    Welcome to my blog! This is my first post as my site officially goes live.

    I’m starting this blog to share my tech journey with you, as I learn Linux administration and explore all the fun stuff in between.

    My journey began about a year ago when I discovered this thing called Linux. What started as a side project to customize my first Linux distribution quickly turned into a deep dive I never saw coming. Since then, I’ve been hooked.

    This blog will be where I:

    • Document my projects
    • Dive into new (to me) tech topics
    • Share how-to guides based on my experiences

    My Goal

    My main goal is to become proficient in Linux System Administration, with a focus on operating and hardening enterprise Linux operating systems like Debian and RHEL.

    Stick around and follow along as I navigate this journey – with all the bumps, wins, and lessons along the way!