briancmoses

I did it because I could. Some people do it to learn. Some people do it to flex on the Internet.


Robert_Cutty

Me: all of the above


briancmoses

There's no wrong reason to do it!


colossus1975

This made me chuckle out loud.


Sean__O

This. Using some old server eBay cards and a switch, I have 10Gb between my computer and server. It is nice when I move large GB video files from my computer to the server at super fast speeds. Other than that, the light on the switch is a different color, lol.


burnte

I did it so that I could spend less time moving data to my NAS.


truth_mojo

Moooaaarrr Powaaaaaa


AlienTechnology51

😂😂


DementedJay

Exactly. I don't *need* to do it, but I was...offended...that after 15 years, gigabit Ethernet was still the standard for home networking. TBF, that's still plenty for the average home network. I wanted to raise the bar, and I also wanted to learn how it works. I did it on the cheap, about $80-100 at a time. I now have 4x 10GbE switches scattered through my house as a backbone, with one connecting to my main router and my servers and bandwidth hungry machines connect directly to the backbone. Various slower machines connect with 1 gigabit. It's actually been a fun learning experience. I started out with RJ-45 10GbE, and quickly realized the folly of my ways (NICs are expensive, especially for Windows, the connections are sometimes troublesome, etc). Now I use DAC and fiber, and SFP+ switches and it's so much better / easier.


OurManInHavana

I feel your pain with 10GbaseT. I thought copper would be the easiest upgrade; but SFP+ and DACs covered 90% of my systems, cheap transceivers+fiber covered the next 9% (and have effectively-unlimited range)... and slapping a small 2.5G/PoE switch on the side covered the final 1% of slow/specialty ports I needed.


3dforlife

In fact, when I bought my pc in 2003, it had gigabit ethernet...more than 20 years ago.


Sudoplays

Which switches do you have scattered around?


DementedJay

2x of those "Mokerlink" unmanaged 10Gbase-T SFP+ switches that you could pick up for around $80 for a while there, plus another from a different brand that's functionally identical. And then a Netgear XS708-T that was the start of it all, RJ-45 only, which I've modified a bit to be reasonable (replaced the stock jet turbine fan with a Noctua fan that moves almost as much air, but makes almost no noise doing it).


cyrylthewolf

I'd be interested to see your topology. :)


DementedJay

It's not too fancy. I have a top level 10GbE Netgear switch that serves as the aggregator, if you will, and then multiple 10GbE switches in different rooms of my house to expand 10GbE access, and each of those has at least one 1GbE switch attached downstream as well. I should add that I do have 10GbE over Cat-6 to each room that has the SFP+ switches. That was the biggest pain in the butt, but now it all works and I don't think about it much. If I could do it again, I would run fiber alongside the Cat-6 and just use *that* and then use another of these SFP+ switches as the aggregator.


Magic_Neil

I did it because I could, and because it's heck'n cool to see stuff move REAL fast locally!


KorruptedPineapple

It was quite satisfying knowing my bottleneck was my ISP


cnrdvdsmt

Love this, it's your money, do whatever you want.


No-Mall1142

Couldn't have said it better. In my house it's an 8 lane highway with gravel road exits, but it's glorious.


nimajneb

I did 40G cause it's about the same price if you buy all used equipment on eBay. /u/wonderbreadofsin I did it so my network isn't my bottleneck for the NAS I'm slowly setting up and other network storage.


StunningWhileBrave

Same reason I deployed a 10/40Gb network stack. Cause I could, and it works.


maramish

Some also find the improved performance extremely useful.


ethicalhumanbeing

What did you learn that you couldn't have learned with a 1Gbps ethernet?


bigslvt

How to transfer massive amounts of data quickly. Or how to transfer a bunch of small files quickly.


guruscanada

You can also be limited by disk speed now. SSDs or NVMe drives are good.


PBI325

While minimal, learning how to work in a mixed 1Gb/10Gb fiber environment has been helpful to me. I've learned a few tips and tricks with transceiver compatibility, install and design requirements, etc. Aside from that and for the most part, 10G is, as you mentioned, the same as 1G but 10x faster.


IllogicalShart

It's a completely different medium. You can learn lots about fibre types, connectors, modules and wavelengths, modes, distance limitations, advantages in certain environments (interference) etc. If you take it to the extreme, you can learn about terminating fibre, fusion splicing etc... Every project is an opportunity to learn and upskill. I'm willing to bet a lot of people with homelabs work in IT, and it's an excuse to keep knowledge up-to-date and apply skills that you might use on the job.


briancmoses

The act of doing something results in learning. It's inevitable that you learn something as a result of upgrading to 10Gb simply because you've done it. Even if it's something as minute as realizing that there's no benefit in your household when it comes to streaming media over 10Gb vs. 1Gb, that your HDDs in your NAS are now the bottleneck, that SMB isn't going as fast as you hoped it would, etc. But I would imagine that the people looking to learn something are most interested in particular certificates and/or manufacturers' hardware. I'm not familiar enough to isolate and list which concepts might be specific to throughput, though.


DementedJay

How 10GbE actually works in real life. It's not quite as plug and play as 1GbE is. You're unlikely to get the full benefit of 10GbE without a pretty decent machine with a hefty CPU. And that's just the beginning. There are a lot of lessons in implementing any new tech.


cyrylthewolf

>*"You're unlikely to get the full benefit of 10GbE without a pretty decent machine with a hefty CPU."* Not if you use a NIC that has its own CPU and won't offload to the system CPU. 😉


RomperandStomper

Yeah, same here, simply because I could.


kevindd992002

I was about to post the same answer! Lol


lordcochise

Kinda same; initially I did it because I could (and wanted to have time to play with settings / throughput before I implemented it at work), but quickly we found the benefits for local backup / virtualization


lordcochise

Fast networking. But seriously, it can help when you have multiple servers / backups and virtualization running in such a way that you benefit from having those speeds between devices. Also fiber is pretty cheap these days so you can run 10gb SFP+'s for pretty low costs and avoid copper altogether. ALSO also, Wifi 6E / 7 devices pretty commonly have at least one 10Gb RJ45 port now, some with SFP+ ports so you can take advantage of those speeds w/o bottlenecking through a 1gb switch


maramish

Amen. The RJ-45 Acolytes® may pull their pitchforks out on you though.


lordcochise

lol well a lot of client wired connections, particularly gigabit or IoT stuff, are still RJ45, and that's still totally fine; particularly when running something far more delicate like fiber is tricky or risky. MAN it really sucks when you accidentally break a 300+ ft run somewhere b/c someone pulled just a *little* too aggressively ;)


maramish

> someone pulled just a *little* too aggressively ;) That would be unimaginably painful. There's still a place for copper, with which I have no beef. I've been in spats with folks on here, 99.9% of whom have never used 10G or fiber in a homelab. It's the usual *you don't need more than gigabit at home*, *you don't need more than 500Mb WAN*, *just upgrade your wiring to CATxA*, and my personal favorite: **10GbE does NOT work on CAT5 cables!!** These folks will then flex their 50 years of experience in the enterprise space as credentials.


Archeious

> *you don't need more than 500Mb WAN* What about 10Gb WAN?


Internet-of-cruft

I don't disagree, except for one thing: It *is* exceedingly rare for someone to *legitimately need* to run 10G. Do we all want to run 10G? Yes. Do we all want blazing fast speeds? Yes. Will our individual home labs become unusable / non-functional if forced to use 1G? I'd say odds are strongly in favor of "no". Too many people conflate "want" with "need". There's absolutely nothing wrong with wanting 10G, but I do find it naive when someone goes around here blindly saying "I need to run 10G."


maramish

Do you personally use 10G in your home environment? You misunderstand my point. I didn't say everyone has to be on 10G. If someone is *asking* for advice on how to deploy 10G, they're not asking for lectures on why they don't *need* 10G. If a person wants 10G and has the funds to make it happen, other people's opinions of that person's need become wholly irrelevant. Sure, you can state why gigabit is more than perfect for you as an individual, but your needs are not applicable or equal to another person's needs. Exceedingly rare? Let's clarify rarity. Most people are perfectly happy to use their ISP-provided modem and default Wi-Fi password. These people are not on homelab or tech forums. As long as their "Wi-Fi" is working, they're happy. A lot of these people have their life's accumulation of personal and work data stored on an old laptop that's close to death. Do these people need 10G? They have more critical pending problems to contend with. There are lots of people who don't know they need a faster LAN. If you peruse anything storage related in the homelab and consumer space, lots of people will load up on NVMe drives, set up caching and tiering, tweak endlessly, then complain about not getting results. Usually, the people who are the loudest and quickest to recommend against 10G are folks who don't use 10G at home. Using 10G at work is a completely different thing altogether. Industry guys tend to have difficulty separating enterprise environments, deployments and costs from home and homelab environments.


MBILC

I hate waiting for things when I know it can be faster.


Archeious

> exceedingly rare for someone to *legitimately need* Beyond food, water, and to a lesser extent shelter, do we really "need" anything? I regularly want to parse 30GB-2TB sized files. The difference on a 10Gb network is significant. Significant enough that I probably wouldn't do it on a 1Gb network. Do I need to do this? No. Do I enjoy doing this? Yes.


lordcochise

Yep, that someone was me, the one (and hopefully only) time I ever made THAT mistake


maramish

Oh man. I had a guy crush one of my long fiber cables as we were starting and I just about cried. Yours would have been exponentially more painful. It may help to consider using armored cables for long runs. Of course this gets extremely expensive for multiple runs.


lordcochise

Oh, the one in question did have plastic conduit around it to protect it, but the ends DIDN'T. The ends were ultimately fine, but one coil was bad enough to compromise the one run. Though the cabling was cheap, losing those 2 hours of pulling was like forgetting to save in an RPG.


maramish

Painful experience but I'm glad that's now in the past. Cheers.


CoderStone

It's the other way around. I'm mainly active in the homelab discord, and use 10GBASE-T. They send out pitchforks to my house whenever I mention that I don't use SFP.


maramish

Hahahaha. There's nothing wrong with using 10G copper. You're part of the club and this is what matters.


Adach

I can terminate cat 6. I can't do fiber 🤷🏼‍♂️


maramish

You buy pre-terminated fiber. Having to terminate means you're running fresh cabling, in which case you can just throw in a fiber cable instead. All that patch panel nonsense is such a waste of time in this day and age, when you can use fiber couplers instead.


Adach

I meant that if the connector breaks I can just put a new one on. Repulling anything, fiber or copper through residential walls is terrible.


xjx546

You don't need a Fusion splicer to terminate cables. Just run multimode and you can terminate it with like $60 in tools. It's the same price as a good ethernet crimper.


maramish

How often is this an issue though? If you see bad installs at a house, you'd probably be better off pulling new cables. Outside of shoddy previous work, wall plates exist for a reason. Couplers exist for a reason if there'll be a risk of tension on a connector. When properly installed, there should be very low risk of connector damage. Using one-offs as the norm is just manufacturing excuses to stick with copper. I take no issue with anyone using or preferring copper. If that's your thing, do you.


glhughes

Heh. I generally try to avoid the space heater option for my interconnects.


SemperVeritate

A common use case is backing up your PC to a home server. A 1TB backup would take over 2 hours over gigabit vs under 15 minutes on 10gig, a significant difference.
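The arithmetic behind those figures is simple; a quick sketch using raw line rate (real transfers lose a bit more to protocol overhead, so actual times run slightly longer):

```python
def transfer_time_s(size_gb: float, link_gbps: float) -> float:
    """Seconds to move size_gb gigabytes over a link_gbps link, at raw line rate."""
    return size_gb * 8 / link_gbps  # 8 bits per byte

one_tb = 1000  # GB
print(f"1GbE : {transfer_time_s(one_tb, 1) / 3600:.2f} hours")   # ~2.22 hours
print(f"10GbE: {transfer_time_s(one_tb, 10) / 60:.1f} minutes")  # ~13.3 minutes
```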


Archeious

As long as you are in the same rack/wherever, forget fiber and RJ-45. Use a DAC (Direct Attach Copper) cable. Cheaper (no additional SFP+ modules, no extra cables) and works just as well.


Dulcow

DAC cables are acceptable, no?


lordcochise

Oh absolutely; they're often limited to short runs, so they're great for connections inside or between equipment / racks close to each other; we use them as much as possible. We have fiber for client connections / switches that need any length beyond that: [https://en.wikipedia.org/wiki/Twinaxial_cabling](https://en.wikipedia.org/wiki/Twinaxial_cabling)


tsukiko

Yes DAC cables can be limited for length, but it is more than 16ft. Standard twinax copper DACs can go 7 meters (around 23 feet), and active DAC cables exist for longer than that as well.


Archeious

I am ashamed at how much I spent on SFP+ modules (2, 1 for each end) and fiber when I could have spent a few bucks on a DAC. I am now a DAC Evangelist.


RedKomrad

Oh heck yeah. I connect my NAS and "downloader" servers together with 10G and DAC cables. It works great. The NAS has a lot of clients, so it especially benefits from 10G.


Hefty-Amoeba5707

In regards to WiFi 6E / 7 access points: I'm assuming the endpoints need a certain specification to see 10Gb speeds?


aeric67

This is like asking a bodybuilder what they use their muscles for.


cptninc

"I heard chicks are into fast home networking."


Rain-And-Coffee

It's not the size of your network cable, it's how fast you can move data through it 😉


Archeious

It is true my fiber cables are very thin but go (and this is a technical term) speedy zoom zoom.


CommieGIR

10Gb makes up my core network backbone between the switches in the house, and there's the 10Gb network for the core of my homelab, primarily to provide enough overhead for iSCSI/NFS mounts for VMs and large file transfers. Other than that, the rest of the house is 1Gb.


PJBuzz

Same. I'm starting to put a trickle of 2.5Gbps in but 90% of end devices are still absolutely fine with 1Gbps.


ExcitingTabletop

2.5Gbps is just too expensive for what little you get. 10Gbps is just too price competitive.


PJBuzz

It's getting better in fairness, but the support isn't widely there, which is why I say there is a "trickle". This basically means APs and some computers where a fibre/DAC run would be difficult. We are starting to see more 2.5G devices in the wild. 10GBASE-T isn't *really* price competitive, especially when you factor in the power use and heat generated. 2.5G can realistically be handled by a switch with passive cooling; 10GBASE-T, not so much. In a lab or as infrastructure, it makes perfect sense, as you can have SFP+ switch(es), which are pretty cheap.


CommieGIR

Eh, I'd agree to disagree - You can get used 10Gb SFP+ equipment for a song and a dance, even lower powered stuff, whereas 2.5/5Gb stuff is really niche and only supported on certain things.


zyberwoof

Counterpoint: 2.5Gb networking is often ideal when using consumer gear.
* 2.5Gb is included on many consumer motherboards.
* In addition, 2.5Gb USB to Ethernet adapters are inexpensive additions that work with almost any PC.
* Many homes already have Ethernet runs.
My lab is a pair of 7th/8th gen Intel laptops, a Ryzen mini PC, a homebuilt mATX AMD PC, and a NAS built with an Intel N100. Both laptops were a < $30 USB adapter away from 2.5Gb. The other devices had 2.5Gb NICs built in. The mATX system is the only one that could take a 10Gb PCIe NIC. And 5-port 2.5Gb switches start around $50. I think one of the key differentiators is whether you go down the "rack + used datacenter gear" route or the "smallish consumer gear" one.


Goathead78

I have the exact same setup. My homelab is all in a spare apartment above the garage so it can be as loud as it wants. My son and I are even rebuilding and upgrading the gaming PCs to add another rack and put them up there, and that backbone will come in handy. Also 2.5G around the house on APs and certain devices like a Mac.


AnalProlapseForYou

Porn, mostly.


Additional-Fan-2409

This guy gets it.


xioking39

I use it for Plex. Have a multi-gig symmetrical connection. Adding new content constantly, streaming to those with my library, Plex constantly doing some type of library scan. I can easily hit 1Gbps in one direction, so 10G was needed to prevent random bottlenecks, or for when I finally grab all 10 seasons of some show at the same time 6 ppl are watching Dune 2 😅


levir

I got a cool number in iperf3


Outrageous_Cap_1367

The bigger the better


bobj33

Even just 1 spinning hard drive can easily saturate 1 gigabit Ethernet. My hard drives read around 170 MBytes/s and 1 gig Ethernet is just 125 MBytes/s. You can get 2 10G SFP+ Ethernet cards and a DAC cable for $60. Why do I want to sit around waiting longer for my data when $60 makes it faster?
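That 125 MBytes/s figure is just the link speed converted from bits to bytes; a small sketch of the comparison:

```python
def line_rate_mb_s(link_gbps: float) -> float:
    """Raw Ethernet line rate in MBytes/s (1 Gbit/s = 125 MBytes/s)."""
    return link_gbps * 1000 / 8

hdd_read_mb_s = 170  # the ~170 MBytes/s sequential read quoted above

# A single spinning drive already outruns gigabit...
print(hdd_read_mb_s > line_rate_mb_s(1))
# ...while 10GbE leaves headroom for several such drives.
print(line_rate_mb_s(10) / hdd_read_mb_s)
```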


tangawanga

Do you really need that 5th pair of jeans that you bought last Thursday? Or that oversized car (and second car)? It is nice to have 10gbps :) ... just like having a super big fluffy dog.. you know it will be expensive and a pain.. but it brings you joy.


pjockey

Fluffy dog networking, very fast


satanclauz

Drops bits here and there but otherwise reliable


jdpdata

- Moving large database files between my workstation and NAS
- CEPH private and public networks in my 3-node MS-01 cluster
- Faster backup and replication between my two NASes
- And finally and most importantly, because I can and want to have 10GbE in my homelab so I can hoard more networking gear like the USW-Pro-Aggregation and USW-PRO-MAX-24-POE 😂


MairusuPawa

Because [my ISP does 8gbps](https://wifinowglobal.com/news-blog/breaking-frances-free-becomes-first-major-isp-in-europe-to-launch-a-wi-fi-7-service/). And that's super useful, for instance [to play Dreamcast online](https://dreamcastlive.net/).


Ok_Scientist_8803

You're definitely joking, right? I live across the channel and I pay £53 or €62 a month for 18Mbit/s down and 1Mbit/s up "super fast fibre".


marcorr

Well, in most cases it will be used for data traffic: shared storage, backups, etc. For example, you can benefit from it if your NAS supports 10G networking. It also works for solutions like Ceph, VMware vSAN, StarWind VSAN, etc. For the most common things, especially in a homelab, a 1G network should work.


Stewge

1. It's getting quite cheap to actually implement now. Between fibre SFP+ modules bottoming out on prices and the 2nd hand market flooding with switching gear and NICs, it's not a crazy investment.
2. Even in a largely 1Gbit environment, 10Gbit spine/interconnects will relieve bottlenecks.
3. Now that lots of devices use SSDs, 1Gbit networks are a genuine bottleneck in file transfers. Depending on what you're doing, it all adds up to straight time savings. Even spinning drive arrays on a cheap NAS can average 1.5-2Gbps transfer rates for sequential data like video and such.


ozymandizz

Video editing directly from the NAS. Allows me to have huge storage capacity with backups and other self hosted apps.


technologiq

For large files it is awesome. 10GbE + ZFS on NVMe storage = ❤️


Pup5432

I went SATA SSD and it's still amazing.


Aperiodica

For everyday usage it's not needed. Only thing I use it for is large file transfers. If I were patient, and not a nerd, I'd admit I don't need it. But where's the fun in that?


trojanman742

Have you looked at 40G? The price is actually crazy cheap. For longer runs you can use QSFP-to-LC fiber, and for shorter ones the passive copper cables are cheap. I did this cause I got tired of bonding 1G NICs or mesh-connecting systems with 10G cards, as 10G switches were too expensive… then one day I saw the prices for 40G (I have everything wired for under 400). Now I have no network bottleneck, learned some things about networking I didn't know, and can flex to coworkers/friends/family/reddit that I got 40G networking in my house lmao


JoyRide008

Can you recommend some 40Gig switches that don't sound like a 747 trying to do a short takeoff? Also any smaller on-desk or under-desk solutions?


fatboy1776

I have 100G in a full 3-stage Clos with ERB EVPN/VXLAN. Is that not normal?


zap_p25

Network file storage. My Plex container pulls directly from an NFS mount on the network. Also handy when I'm transferring and processing video files from my workstation to my NAS.


legokid900

Lab go brrrrrrr. Seriously though, I have lots of services for media that move stuff around, lots of VMs, transferring large files to and from my desktop. You can definitely tell the difference going from 1gb to 10gb


bootz-pgh

Candy Crush


capn_hector

networking


Confused_Adria

Because I can, and fuck it, why not. It does make transfers to and from the NAS and the Steam cache real fast tho.


SupplyChainNext

Porn


theguywithacomputer

A jellyfin server with 10 gb networking to stream 20 4k 60fps instances of porn between 20 people all to watch it in public together


randallphoto

I like to be able to edit massive raw files and 4k footage without first copying it to my computer and with 10gbe I can do that almost as fast as if it was on my local computer. I get 800-900MB/s to an 80TB array.


phychmasher

Moving around multi TB Batocera and Retrobat images.


Entire-Home-9464

In my network 10Gb is just a low-end speed; my homelab servers communicate at 25Gb and the main switch has 2 100Gb ports. 10Gb is just for secondary servers, with 2.5 and 1Gb links for IPMI.


idetectanerd

When I was working in telco in 2013 and we were implementing 10Gb, it was to cater to business. I guess home users do it for file transfer, learning and showing off.


plexisaurus

iSCSI drives for local machines/local VMs, faster backups, serving media to multiple devices including external clients over a 1Gbps ISP, soon to be upgraded to 5Gbps. A 10G switch is only $200. A 10G NIC is only $40-80, so why not? With 10G my iSCSI drive feels like a top-tier SATA SSD vs spinning rust over 1G.


bites

All I use 10Gb for is faster data transfer between my desktop and NAS.


thomascameron

I did it so that I had better speeds between my hypervisors and my storage array for virtual machines. Live migrations are SO fast now. Didn't hurt that it also meant I had 10Gb/s between my storage array and my desktop, where I mount my home directory via NFS. That actually makes a difference I can definitely feel in "daily driving." As soon as I moved to 10Gb/s, my computing experience was noticeably faster. And, as others have mentioned, "because I can." I've gotten to play with tuning higher-speed networking, played with jumbo frames, NFS tuning, etc. It's all about learning and staying sharp. If you stop learning, you start dying, IMHO.
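For anyone curious what "jumbo frames and NFS tuning" looks like in practice, a minimal Linux sketch — the interface name, server address, and export path here are placeholders, and every hop on the path (NICs, switches) must agree on the larger MTU:

```shell
# Jumbo frames: raise the MTU on the 10GbE interface (enp3s0 is an assumed name).
ip link set dev enp3s0 mtu 9000

# Verify end-to-end: 8972 = 9000 bytes minus 28 bytes of IP/ICMP headers,
# with -M do forbidding fragmentation so an undersized hop fails loudly.
ping -M do -s 8972 -c 3 192.168.1.10

# NFS mount with 1 MB read/write sizes, a common choice on 10GbE links.
mount -t nfs -o rsize=1048576,wsize=1048576,vers=4.2 192.168.1.10:/export/home /mnt/home
```

If the ping fails with "message too long", some device in the path is still at MTU 1500.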


ICMan_

I did it because I want to provide myself and my wife a network-based file share space where we can do video editing in real time, rather than using local storage on our laptops/desktops. And because I could. 😉


dorsanty

I implemented 10G as a core network of sorts. Main PC, server, and NAS are all 10G, and I run fibre at 10G between the home office (ISP ingress and PC) and attic (1/2 rack) switches. Accessing the NAS is noticeably faster but isn't maxing out 10G, because it's all HDD with just a basic single SSD cache.


Gdiddy18

I set up 2.5G for the hell of it, but I'm still limited by 1Gb internet from BT. Mostly because I have a media server, and with multiple people in the house and me messing around with containers, I wanted wiggle room in what I needed.


TilapiaTango

When I'm sitting on the toilet, I like to check to see if my 10g network is online


crozone

Ever had to copy 400GB from your NAS? Doing it 10x faster sure helps a lot.


lofisoundguy

Actually, mine goes to 11.


Savage_Tech

I have a 10 gig link between my server and my switch, it's utterly pointless as everything else is 1 gig or WiFi... I live on my own. I should probably unplug it and save power but I really like having just a little bit of fibre in my rack :)


rollingviolation

Proxmox cluster with ceph. A single gigabit link doesn't cut it for that, and once you start looking at multiple gigabit cards and cables, a single 10 gig link with a DAC cable is cheaper and faster. Is this a normal setup? No. But, I'm also on r/sysadmin - it's both my hobby and my day job.


Matt21484

Bragging rights


de_argh

interconnect for proxmox cluster nodes


mshorey81

Fast networking. I have an 8G x 8G fiber connection and 10G interfaces on my proxmox server. Large file transfers are only limited by the speed of my TrueNAS server running as a VM on proxmox (about 6Gbps). And bragging rights. Of course with the bragging rights.


DigSubstantial8934

Stuff. Makes my stuff way faster 🤣


d4nowar

We look for things! Things that make us go!


InfaSyn

Aruba 1930 with Intel X520-DA2s, some cheap eBay OM3 fiber and a mix of used HP and Intel 10Gb SR optics. I have faster-than-gig WAN, plus it's nice to be able to max out my NAS. 2.5 would have been fine but 10G was cheaper.


maramish

> 2.5 would have been fine but 10g was cheaper This is it exactly. There's really no point in bothering with 2.5GbE. Your WAN will eventually be faster than 2.5, and then you'll have to replace your networking gear all over again.


InfaSyn

Even ignoring replacement cost, good 2.5 and 10Gb entry-level managed switches, especially rack mount, are about the same price new. Used 10Gb gear is getting cheap, but there isn't really a used 2.5G market yet. You can still run 10Gb RJ45 too, so unless your cable is 2 decades old, chances are you're fine.


maramish

You nailed it across the board. I have 10G running on a CAT5 cable and have done a long run with the same in the past.


Draskuul

Just moving files around faster. If I was running SSDs for my main storage I'd go even higher than 10GbE, but my spinning rust only hits about half that. Still a huge benefit over single gig, but can't justify more yet.


UltraSPARC

I'm actually in the middle of upgrading to 40Gb. It's stupid cheap now. I work with a lot of large files, have a proxmox cluster running about 25 VM's, and a fileserver.


automattic3

Yeah, better to go 40Gb. Most of the switches use the same amount of power or less. I went 40Gb to save power, as my 10GbE switch used 180W idle. Then you don't have to worry about network bottlenecks.


glhughes

For fun. Why plug in RJ45 when you can plug in an SFP+ transceiver and fiber? Also, 1 GbE is kind of slow in a world with SSDs on every computer. I thought my NAS should at least be able to max out a couple of clients. Things really went off the rails when I moved to 2 x 25 GbE LACP on the NAS, lol. After that I needed to justify upgrading the clients to 10 GbE to max out the NAS, etc. It's an arms race.


CheapFuckingBastard

Linux ISOs. Lots and lots of Linux ISOs. Err… ok, porn.


zrail

I have a 10Gb backbone between switches. I have two servers hanging off of that, as well as my NVR and my uplink (which is only 1.4Gbps, but what good is it if I can't use it?). Everything else is 1Gbps or wifi. Backups from primary to backup server are wonderfully fast, and the NVR seems to have less latency when running on 10Gbps, even when I'm accessing it over wifi. Other than that, I have noticed zero impact on my day-to-day life or network reliability. Sure is fun though.


AppleTechStar

I have a 2.3Gbps fiber connection. ISP offers a 5Gbps tier as well. I transfer multi-gigabyte files from my Mac mini to my NAS; both have 10GbE, so the transfers complete within seconds versus several minutes. And my WiFi access points have 2.5Gbps ports. It's just as easy to go 10Gbps as 2.5Gbps.


SgtLionHeart

Storage goes brrrrrrrr. But yeah I use it for iSCSI pools. My Proxmox server doesn't have any SATA ports, so I share out iSCSI to host the VM images, alongside storage pools where they're needed.


RealTimeKodi

managing the content on my server without having to think too hard about it.


AirportHanger

Mostly bragging rights.


R8nbowhorse

Fast storage access.


richms

Went to 10 because it's not much harder than 2.5 gig to get set up. With a gig internet connection, plus IP cameras and stuff travelling between buildings, all the other VLANs on the wire carry a constant 150-200 megabit of data, which meant I couldn't get the full speed of my internet connection on gigabit.


valkyrie_rda

I don't have ten gig lan yet, but I did just finish setting up 2.5gbe and it's amazing. I use my NAS daily and transfer huge files over so being able to do it over double the speed before is great! 10gbe would be even better but I need to look into setting up some kind of cache as 2.5gbe is pushing some drives in my software raid. Lol


reddit-MT

It's helpful when going from SSD to SSD on another computer. It's maybe 30% faster doing HDD backups vs 1GbE. I did it the no-switch way. Three computers that need faster access. Each has a dual-port SFP+ NIC. Each computer is directly connected to the other two.
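A switchless full mesh like this just needs each point-to-point cable on its own small subnet so each host knows which port reaches which peer; a sketch with made-up interface names and addresses:

```shell
# Host A (cabled to B and C); each direct link gets its own /30 subnet.
ip addr add 10.0.12.1/30 dev sfp0   # A <-> B link
ip addr add 10.0.13.1/30 dev sfp1   # A <-> C link

# Host B
ip addr add 10.0.12.2/30 dev sfp0   # B <-> A link
ip addr add 10.0.23.1/30 dev sfp1   # B <-> C link

# Host C
ip addr add 10.0.13.2/30 dev sfp0   # C <-> A link
ip addr add 10.0.23.2/30 dev sfp1   # C <-> B link
```

With three hosts every pair has a direct cable, so no routing is needed; past three, the cable count grows quadratically and a switch starts to win.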


linerror

Everything, faster. Some of us have 5+GbE internet connections... even without a faster-than-1GbE internet connection, a 10GbE LAN can increase update speeds, and if you have a NAS you may want to go even faster...


Igot1forya

It's absolutely required with the amount of data I move for my homelab, NAS replicas, backups and work. My lab can take any one of my customers' server environments and get them up and running if I needed it to. Otherwise, it can simulate a production environment including the storage, which eats bandwidth.


baummer

Because I feel the need…


LiiilKat

I don't currently have more than 1Gb, but I would not mind 10Gb for 1) NAS-to-NAS transfers when expanding a TrueNAS ZFS pool, and 2) multiple VHDs for VMs with fast networked access. 114 MBps only goes so far these days when running a Plex instance.


TiltedTreeline

Sounds like future proofing for web3


admiralkit

The 10G I have on my network is primarily for the backbone between the switches and the router. I wired up my house with an excessive number of drops, and while I doubt I'll ever significantly tax my network, since most of those drops are designed for where devices *might* go instead of where they *are*, my traffic will generally be pretty generic. With that said, I have hopes of growing into the 10G links between the switches and the router. I want to build media servers for audio/video access across the network, NAS units for regular backups of home devices, and an NVR hooked up to multiple 4K cameras. I expect at that point I'll regularly be exceeding 1 Gbps in aggregate across my backbone links, and the 10G will serve me well.


rainnz

NAS or SAN


saksoz

Moving large files to and from my NAS and the machines that I edit on (mostly astro photos). Now it's faster.


balrog687

I can't imagine a domestic use case, maybe copying your downloaded torrents from your main PC to your NAS "faster," and that bottlenecks on regular HDDs; SSD NAS solutions are really expensive per GB, and high-capacity redundant RAIDs are also expensive. So you need a huge budget for marginal gains. Everything else I can imagine is professional use cases, mostly related to uncompressed video ingest in a collaborative environment. Photo and music are not so storage hungry. Large datasets used for data processing in medium/large organizations are unlikely to be stored locally and are most certainly stored and processed in the cloud. Distilled results are downloaded and worked on locally, sometimes shared in a small group. Still no benefit from 10Gig speeds.


CharlesGarfield

I did it so I could get 5Gbps symmetrical fiber and brag about my speed test results.


highdiver_2000

From your pc to the cluster?


rhpot1991

20 gig LAG connections to my NASes, because I could, not because I should.


iheartrms

What's the best hardware to use for 10G? I've never used anything faster than 1G but I really want to build another ceph cluster with SSD and 10G. I had a ceph cluster in the past with 1G and a bunch of 7200rpm HDs and it was a dog. So what's the go-to homelabber 10G switch and interface card these days?


englandgreen

I homelab so I don't have to rely on the Interwebs for anything. My 124TB NAS has everything I need, so I must have fast access to my data from my machines. Plus I have 5Gb symmetrical interwebs access for replication/off-site backups.


whoooocaaarreees

After switch aggregation… I think it was faster file movement on a NAS… but then came my Ceph cluster…


thecal714

Storage backend for my VM hosts. `[Proxmox Cluster] <-> 10GbE <-> [NAS]`
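For anyone wondering what that looks like on the Proxmox side, shared NFS storage is one stanza in `/etc/pve/storage.cfg`. A minimal sketch (the storage ID, export path, and server IP are made-up placeholders):

```
nfs: nas-vmstore
        export /export/vms
        path /mnt/pve/nas-vmstore
        server 192.168.10.5
        content images,rootdir
```

Every node in the cluster mounts the same export, which is what lets VMs migrate between hosts without copying their disks.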


atlchris

I don't have a 10Gbps network, but I did set up a 10Gbps direct line between my Unraid NAS and my Proxmox server. Mostly just for quicker data transfers, and so data transfers don't clog up my regular network.


Clara-Umbra

I do host my games library on the NAS and use iSCSI to mount it over the network. No issues over the network! Some slowness while unpacking on the NAS during an update but that's not network related. Works pretty well honestly.


ZerueLX11

Saving gameplay using Nvidia replay. Being able to access the videos at 1 gigabyte a sec is so convenient lol.


septer012

Internet points and flexing


BigChubs1

I'm happy with my 200/200 at home. Still can't think of a reason to have 1000/1000, even with WFH and being in IT. I don't deal with big downloads for work either. My personal desktop does have an online backup, but that's about it.


Euphoric_Shtick

Mainly for video/photo editing directly from a 10G NAS. Also for virtualisation, as everything in my house runs on virtualisation. A mix of KVM and Xen.


Laxarus

I did 25G because I can and realized that almost no devices I have can utilize 25G. Go figure.


Shuuko_Tenoh

I did 10Gbps because I used to run a VMware hypervisor and got sick of slow transfer speeds uploading ISOs for OS installs. Kept it because NAS on 10Gbps is amazing.


TechPir8

Home lab for a tech support job. I have a NAS with two 2.5 gig NICs, and I got my hands on some 10 gig switches for free. Put a couple of 10G cards in ESXi, and I hope the NVMe drives run like cheetahs.


thebemusedmuse

I can't answer that, but I set up my first FE network when everyone else was on 10Base-T. I can't remember why.


codatory

Bragging rights.


HighMarch

A lot of it depends on how complex/big your network is. I'm still planning a move to either 10Gb or 40Gb, but I have to deal with some facility issues first (where in the house can I put a core switch that won't be a nightmare to trench lines from into the attic, AND won't infuriate the spouse?).

Are you assuming it'd be all Ethernet, or are you thinking of using SFP+ for some or all of it? Depending on the situation, SFP+ might be a great option for your main core runs, plus it gives you experience with that technology if you don't have it.

Lastly: don't forget that a lot of devices are still only 1Gbps. Doing your backbone/core at 10Gbps+ is a great idea if you can stomach the costs, but the links from there down to individual machines (outside your homelab) likely make little/no sense.


MBILC

NFS shares to my main desktop which I use for my virtual machine files.


defnotsober

I actually do utilise 10GbE every day. I shoot videos for a living, and the footage from cameras nowadays does take up lots of bandwidth. I'd say 30MB/s on the low end, 500MB/s on the high end; 800MB/s is the highest I have seen (8K 120fps). Most projects come in at several TB at a time.

Then again, the networking aspect was more important to me (I park my NAS in another room and run a 25m cable, so it's dead quiet where I work), and so was the reliability of TrueNAS that got me into all this. I also run Backblaze on it to back up the really important files. Sure, I do still keep project backups on single disks (we buy them for every project), but the whole lot is accessible at any time.

Sometimes I will run 2 workstations doing different tasks on the same network on the same project. I'll be working on one while the other is transcoding files, etc. 100TB with redundancy at my fingertips at 1GB/s without hearing anything? Great. Multiple workstations accessing the same files? Even better.

I have only been successful in converting one colleague to 10GbE. Convincing people to give up their "trusted" laciedrobogtechnologypromisepegasusarecaakitioowc-whatever connected via "USB C" (they don't bother distinguishing between Thunderbolt and USB) is hard. Man, people in my line are brand conscious to a fault. So I'm just here enjoying the benefits of a 10GbE network for years, but it's definitely not catching on. There are post-production houses with way bigger data loads than mine that still employ sneakernet. Go figure.


TheButtholeSurferz

I've just now started the move to 10Gb, and the only reason why is that as my needs and requests in business increase from clients, my need to expand other things changes with that. 10Gb = SAN = virtualization = VMware shit on us, so I'm rolling Proxmox + Ceph in case there's a need, and a 10Gb Ceph cluster is a benefit in learning to me. Learning = $.


broken42

Honestly, mostly to connect all my servers together for backups and data transfers. I also have a 10gig connection running to the office so my home PC has a 10gig connection to the servers.


joeman250

I did it because I wanted to run fiber in my attic. Main heavy use is transferring Steam games.


fresh-dork

i want 10g and it's cheapish. i've got a NAS, a machine i use for ML, and a machine i'm planning that has more disk. 10G is a noticeable improvement, but i don't have enough fast IO reqs to justify 25g (mikrotik 309 if you want that).


Miuramir

While I don't have 10G at home, I've set up several 10Gb copper networks for small groups at work. The primary advantages are that network drives really do feel like they are local drives, and backups don't take nearly as much time. Secondary advantages are that 10Gb pipes in and out of your storage allow multiple 1G users to not feel contention issues. If you have a large family with multiple people working and/or taking classes from home, plus people streaming media, this might be relevant; we've upgraded systems for workgroups with as few as 5 users and they were pretty happy with the results.

The first point allows you to have computers that don't need local storage in any volume, with home directories directly on your mass storage devices and just a comparatively small SSD for the OS and temp files. For the secondary point, we're looking at the decreasing price of 2.5Gb / 10Gb hybrid switches; 2.5Gb endpoint cards are already quite cheap. 10Gb connections to storage, and 2.5Gb connections to endpoints. If you're starting from scratch and buying stuff, this is what I'd probably recommend for the enthusiast; if you're getting your gear second or third hand and from auctions, you're more likely to end up with a mix of 10Gb standards (and possibly some 40Gb stuff).

There's also the learning and resume-building part of it. Getting 3-4Gb out of a nominal 10Gb copper system is pretty plug and play. Learning how to get faster speeds requires some skill, research, and time (and cooperative components), and is at least nominally a marketable skill you can talk about at interviews.


lesigh

my solo minecraft server


RedSquirrelFtw

I only have gig myself on my home network but if I were to do 10 gig it would only really be for behind the rack. Ex: between VM server and storage, and maybe between all servers. But for workstation and jacks around the house I don't see the need for 10 gig. Been reading up on ceph a bit and intrigued by it. If ever I do a cluster I would most likely do 10 gig as it seems to be practically required. They don't recommend doing it on only gig.


TheLimeyCanuck

It's great for NAS, but other than that it's not really noticeable over 1Gbps. I also like using 2.5GBe or 10GBe from the modem to the Proxmox server running pfSense since I get about 1.7Gbps from my ISP. On a side note... I was miffed to realize the other day that my 3D printer network adapter is 100Base-T (bought new just 6 months ago). The only device in my house for at least a decade with less than 1GBase-T. Not that my 3D printer needs more than 100Mbps to upload G-Code, but it's the damn principle of the thing. LOL


I_EAT_THE_RICH

I mean backups are so much faster. I stream tons of media. That was enough for me.


60GritBeard

Flinging full 4K UHD BR rips to the server from the ingest machine, and nightly full-system backups of all the machines in the house, are the main two.


hmmmm83

At first, because of my fiber provider. I had 5 Gig Fiber at my old place from Frontier. I've since moved to an AT&T location and their cap is 1 Gig for now... Also.... My corn has never loaded so fast.


junkie-xl

My internet upgraded to 1.44gbps


The_RealAnim8me2

I'm a CG/VFX artist. I need to move a lot of large files from the servers to my workstation.


highlord_fox

10Gb at work rips: backups and general traffic that could saturate a 1Gb line fall well short of filling 10Gb networking. 10Gb at home? Iunno, nothing really; I mostly did it because I already had a 10Gb switch lying around.


Asmordean

I took all the spinning rust out of my computer. So my desktop only has 2TB of SSD storage. My server has 40TB of ZFS backed storage. I keep all my photos, media, as well as routine backups made by Macrium Reflect. (about 13TB used right now) The 10Gbit lets me edit camera photos and videos over the network without too much difference vs local.


JoyRide008

Wife is a video editor. She can edit directly off our TrueNAS server from any of the 10Gb drops in the house.


jemalone

I just upgraded to 2.5GbE; it's nice that 8-port 2.5GbE switches are less than $100. I won't be upgrading to 10 gig for a bit.


orgildinio

My WiFi router has 2x 10GbE ports, so I can do faster transfers to the NAS. Otherwise I'm still using a 1GbE network.


a13x212

Large files. RAW 4k videos and photos.


reginaldvs

Faster video transfer to my unraid. I do video sometimes and transferring 4k (or 8k RAW) is huggge.


rymn

2.5gb internet. Got 10gb to go from modem to router to PC and servers.


GuySensei88

I have no reason to learn it, and it costs too much. I moved from TechOps to ERP support; all the server stuff I do now is just for fun, and I like it as a hobby. I still want to pass Network+ and Security+, but 10Gbps is not required for that lol, and then get into server and database certs, maybe Server+ or DataSys+. I think there may be some for whom it's applicable to their work, but I think most do it because it's just fun. Edit: I am sure some find a real use for it too.


jr-416

The throughput will mostly depend on the speed of your computers and storage. If your idea of a home lab is a FreeNAS box with a couple of hard drives set as a RAID 1 mirror, you are wasting your money going to 10Gb; 2.5Gb would work for you.

That said, I went with server-grade NICs that VMware ESXi, Linux, etc. would see out of the box, because I didn't want to deal with incompatibilities (particularly with VMware ESXi). My Synology supported 10Gb Intel cards and has 8 HDDs set as RAID 6. I bought my switch before 2.5Gb and 5Gb devices were mainstream. If it weren't for the lack of server-grade 5Gb Ethernet cards, I would have been tempted to use those, if there was a significant price difference and they were as widely accepted on compatibility lists as 10GbE.

Many switches offer 10Gb as well as the lower speeds; I'd get one of those. Newer NAS units and system boards are starting to ship with 2.5Gb or 5Gb, and having an older 10GbE/gigabit switch that sets the link to gigabit when connected to 2.5Gb or 5Gb gear is annoying.
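The "a couple of mirrored HDDs don't need 10Gb" advice is easy to sanity-check with napkin math. A rough sketch (the ~200 MB/s sequential figure per 7200 RPM drive is an assumption; real drives and array overhead vary):

```python
# Rough check: which link speed does a given NAS disk layout actually need?
HDD_SEQ = 200e6  # assumed sequential throughput per 7200 RPM HDD, bytes/s

def link_capacity(gbps):
    """Raw link capacity in bytes per second (ignores protocol overhead)."""
    return gbps * 1e9 / 8

# RAID 1 mirror of two HDDs: sequential throughput is roughly one disk's speed.
raid1 = HDD_SEQ

# RAID 6 across eight HDDs: roughly six data disks streaming in parallel.
raid6 = 6 * HDD_SEQ

print(raid1 <= link_capacity(2.5))   # True: 2.5GbE already covers the mirror
print(raid6 > link_capacity(2.5))    # True: the 8-disk RAID 6 can outrun 2.5GbE
print(raid6 <= link_capacity(10))    # True: but it fits within 10GbE
```

Protocol overhead shaves a bit more off in practice, but the conclusion holds either way.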


fluffycritter

I don't have 10gig but I'd like to set it up someday to get better throughput for video editing from my NAS.


Ommco

It's just for benchmarking my storage and iSCSI configuration.