Posts

An Nvidia RTX A4000 GPU in the Dell PowerEdge T640 without the GPU kit

I just wanted to report the success of replacing my Quadro P2200 (5 GB) with an RTX A4000 (16 GB) in my T640. The A4000 is a 140W single-slot GPU: it draws up to 75W from the x16 PCIe slot and needs a 6-pin PCIe connector for the remaining 65W. Since I like my T640 silent, the GPU PDU board was not an option for me: on the T640 platform, adding the GPU PDU board under the mainboard requires adding the infamous and noisy GPU external fans (pictured below, along with the GPU PDU board). The lower half of my T640's front is occupied by a 16x2.5" SFF SAS/SATA backplane, which luckily has a cable with two SATA power connectors on its back. With that in mind, I did the following math: a SATA power connector can carry up to 54W, which means a total of 108W for the two SATA connectors. The 6-pin PCIe connector required to power the A4000 will likely not exceed 65W, since 75W is coming from the x16 PCIe slot. With that in mind, a very simple StarTech dual-SATA to 6-pin PCIe adapter was obtained
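The power math above can be sketched as a quick shell check. The wattage figures are the ones from the post; the script itself is just an illustration, not anything the adapter vendor ships:

```shell
#!/bin/sh
# Power budget for an RTX A4000 in the T640, per the numbers in the post.
GPU_TDP=140                       # A4000 total board power (W)
SLOT_W=75                         # max draw from the x16 PCIe slot (W)
SATA_W=54                         # max per SATA power connector (W)

SIX_PIN_W=$((GPU_TDP - SLOT_W))   # what the 6-pin must supply: 65W
ADAPTER_W=$((2 * SATA_W))         # two backplane SATA connectors: 108W

echo "6-pin draw: ${SIX_PIN_W}W / dual-SATA adapter budget: ${ADAPTER_W}W"
if [ "$ADAPTER_W" -ge "$SIX_PIN_W" ]; then
    echo "OK: the dual-SATA adapter has headroom"
fi
```

With these numbers the adapter has 43W of headroom, which is why the backplane cable was enough and the GPU PDU board (and its fans) could be skipped.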

My Ultimate Cooling config for the Dell PowerEdge T140 (No Soldering required)

I have a Dell PowerEdge T140 with an 8C Xeon @ 3.4 GHz, 128 GB of RAM and about 32 TB of 12Gbps SAS flash. The default cooling configuration works, but it tends to become noisy under load. Also, the Xeon CPU tends to reach 75-78°C, too close to the 80°C limit for my taste. I like my home lab silent and powerful, but in the summer my T140 was often louder than the T640. So here is what I did:
- CPU fan: The default heatsink has to go; there's nothing to salvage there, but keep the fan, we'll need it later. The heatsink itself would probably be better suited for a 35W CPU; it's not good enough for an 8C/16T Xeon SP Gen2 (E-2278G). The fan has a very proprietary connector that I have not yet identified, but we'll keep it for later. The good news is that -any- Dell PowerEdge T340 heatsink will work in the T140. These heatsinks have a lot more fins and don't require you to do tricks to fit the heatsink to the mobo. The T140/T340 mobos are proprietary and it is not e

3 Decades of OpenWindows

Yesterday - Jan 1st 2024 - I ported OWacomp + XView to RHEL8 and gcc-8.5. I had been using OWacomp on RHEL8, but those binaries were being built on RHEL7 with gcc-3.4. In the process, I tested some of the 64-bit XView codebases available on GitHub but rolled back, because introducing the boost preprocessor broke some OWacomp apps (most notably the filemgr). In the end, I changed less than 500 lines of C code, and it should be suitable for a few more years. I'm not a C programmer by trade; I work in IT. If I can keep using the desktop env (olvwm) that I've been using, with just a few hours of code each year, I'm willing to see how far the rabbit hole goes. I tried asking ChatGPT for help, but the bot did not even -look- at the GitHub repo I had provided, and its help was more or less paraphrasing the C compiler errors. It felt like working with 'Captain Obvious'. I know it's a lost battle, but I already have a contingency plan in place for

Silencing Ubiquiti USW-Pro Aggregation Switch Fan Noise: A How-To Guide

Introduction

If you're an Ubiquiti USW-Pro Aggregation Switch owner, you may have noticed that the fan noise can be quite loud and disruptive. Fortunately, there is a way to silence the fans without compromising the performance of the switch. In this guide, we'll walk you through the steps to reduce the fan noise and make your switch run more quietly. As much as I liked my US-16-XG, I had run out of SFP+ ports and decided to upgrade to a bigger switch. Specs are listed here: https://store.ui.com/collections/unifi-network-switching/products/unifi-switch-aggregation-pro Once rolled out into production, it proved to be a decently silent unit by default. This switch comes with 4 x 40x40x20mm fans, and by default the fans spin at around 3.5k-4k rpm (here is the env show on a cold switch):

us32-0v2-US.6.4.18# swctrl env show
General Temperature (C): 34
Temp Sensor      Temp (C)     State            Max Temp (C)  Alert Temp (C)
===============  ===========  ========

Veritas Cluster Server Cluster Manager (VRTScscm) 7.4.1 on Linux

I'm still using VCS/InfoScale for parts of my homelab. This allows me to sometimes catch issues ([1], [2] and [3]) that others have not found before.

The Legacy GUI

One way to administer VCS with a GUI is the VCS GUI (hagui). Unfortunately, the VRTScscm rpm provided by Veritas is quite dated, and on my RHEL systems it tends to 'freeze' from time to time. I recently noticed that while VRTScscm hadn't been updated since the 6.0.z days, a few updates had been released for Windows platforms only. When I tried the 7.4.1 Windows version on Win10, it worked a lot better than the 6.0.1 Linux version (no freezes, etc.), so I decided to investigate whether I could update the Linux version with the bits from the Windows version. Interestingly, it turned out to be quite easy: updating the JARs from the Linux version with the GUI JARs from the Windows version delivered a working 7.4.1 on Linux (and no more GUI freezes!). Java is cross-platform and this is clearly
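A minimal sketch of that JAR swap might look like the following. Both paths here are assumptions for illustration (the post does not give the real VRTScscm install layout), so adjust to whatever your rpm and the unpacked Windows package actually use, and back up first:

```shell
#!/bin/sh
# Hypothetical sketch only: both directories below are assumptions, not
# the documented VRTScscm layout. Always keep a backup of the old JARs.
LINUX_GUI=/opt/VRTScscm/lib       # assumed dir of the Linux rpm's JARs
WIN_GUI=/tmp/cscm-7.4.1/jars      # assumed unpacked Windows 7.4.1 bits

cp -a "$LINUX_GUI" "${LINUX_GUI}.bak"   # preserve the 6.0.1 JARs
find "$WIN_GUI" -name '*.jar' -exec cp -v {} "$LINUX_GUI/" \;
```

This works only because the GUI is pure Java: the same class files run on both platforms, so the Windows package's JARs are drop-in replacements.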

Takeaways from 3 years of running Red Hat Satellite with ZFS (ZoL) on RHEL

Red Hat Satellite provides distribution of rpms and containers for many Red Hat products. That's an over-simplification, but it works well enough for the purpose of this post. As someone who works with RHOSP and RHEL on a daily basis, I find it convenient to have a local Red Hat Satellite VM that all of my permanent or temporary RHEL/RHOSP nodes connect to. Why use Red Hat Satellite at home? There are a few reasons:
- It provides a sort of local Internet cache of rpms and containers: I might be working on RHOSP 16.2 and perhaps I will need to stand up a temporary RHOSP 13 cloud to assist a customer or work on a BZ. This is usually fine, but it could impact the home bandwidth at the worst moment possible, especially when the rest of the family is taking remote classes.
- It's always faster to cache everything locally, and in some cases of network congestion in the middle of the day, it helped me save the virtual deployments I was launching in my lab.
Why did I want a VM for someth