Stop Blaming Your Storage — Start Tuning Your Workstations
1. Windows + SMB: Tune the Client, Not Just the Server
Windows defaults are notoriously conservative when it comes to networking.
If you’re working with SMB over 25 or 40 GbE, here are a few tuning tips:
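As a starting point, here is a PowerShell sketch of the client-side settings I'd check first. It assumes Windows 10 / Server 2016 or later, and the adapter name is a placeholder, not a recommendation:

```powershell
# Multichannel should spread traffic across RSS queues or multiple ports; verify it
Get-SmbMultichannelConnection

# Multichannel and large MTU default to on, but confirm rather than assume
Set-SmbClientConfiguration -EnableMultiChannel $true -EnableLargeMtu $true -Force

# Bandwidth throttling can cap large sequential transfers on fast links
Set-SmbClientConfiguration -EnableBandwidthThrottling $false -Force

# RSS spreads receive processing across CPU cores ("Ethernet" is an example name)
Enable-NetAdapterRss -Name "Ethernet"
```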
2. macOS + NFS or SMB: Apple-Friendly, But Not Always Fast
Macs are often left at their defaults — which aren’t tuned for high-performance media workflows:
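Two tweaks come up again and again; here is a sketch, with the values as starting points rather than gospel. Note that macOS caps NFS rsize/wsize well below Linux defaults, and the server address simply mirrors the Linux example later in this article:

```bash
# SMB: disable packet signing in /etc/nsmb.conf (assumes a trusted storage VLAN)
printf '[default]\nsigning_required=no\n' | sudo tee /etc/nsmb.conf

# NFS: mount with explicit transfer sizes and local locking
sudo mount -t nfs -o vers=3,rsize=65536,wsize=65536,locallocks,rdirplus 192.168.100.10:/mnt/volume /Volumes/nfs
```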
3. Linux + NFS: The Secret Weapon for High-Speed Workflows
When it comes to raw, tunable performance in high-speed storage environments, Linux and NFS are hard to beat, if you know where to look. I’ve worked on configurations where a Linux box on a 25 or 40 GbE network moved data faster than a Mac or Windows workstation on the same switch and storage… just because of better tuning.
Client-Side Mount Options That Matter
Here are the baseline NFS mount options I typically use:
```bash
nfsvers=3,rsize=1048576,wsize=1048576,noatime,nolock,hard,intr
```

(On modern kernels the intr option is accepted but ignored; a fatal signal can interrupt a hard mount regardless. It's kept here for compatibility with older systems.)
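To make the mount persistent, the same options drop into /etc/fstab. A sketch, with a hypothetical server and mount point:

```bash
# /etc/fstab entry (server address and paths are placeholders)
192.168.100.10:/mnt/volume  /mnt/nfs  nfs  nfsvers=3,rsize=1048576,wsize=1048576,noatime,nolock,hard,intr  0  0
```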
Network Stack & Socket Buffer Tuning (Per NIC)
For 25 GbE and above, your network stack matters as much as your NFS config.
Run these with sysctl -w or put them into /etc/sysctl.conf:
```bash
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
net.core.netdev_max_backlog = 250000
```
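Either way, the values in /etc/sysctl.conf can be applied and spot-checked without a reboot:

```bash
# Load everything from /etc/sysctl.conf and echo the applied values
sudo sysctl -p

# Verify a single key took effect
sysctl net.core.rmem_max
```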
On 40 GbE or 100 GbE, you may want to push these values even higher (134217728, i.e. 128 MB), but test carefully. NIC offloads (like GSO/TSO) and interrupt coalescing settings may also need adjustment with tools like ethtool.
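Coalescing support varies by driver, so treat this as a sketch of inspecting and enabling adaptive moderation (the interface name is an example):

```bash
# Show current interrupt coalescing settings
ethtool -c eth0

# Let the driver adapt interrupt rates to traffic, where supported
sudo ethtool -C eth0 adaptive-rx on adaptive-tx on
```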
NFS Server-Side: nfsd Threads & Async Writes
If you control the NFS server, the number of threads that handle NFS requests (nfsd) is a huge tuning factor.
Check current threads:
```bash
# The line starting with "th" reports the configured thread count
cat /proc/net/rpc/nfsd
```
Increase to, say, 64 or 128:
```bash
# Takes effect immediately; does not persist across reboots
sudo rpc.nfsd 128
```
Add this to your system’s startup scripts or NFS service configuration (/etc/nfs.conf on newer distros).
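On distros that ship /etc/nfs.conf, the persistent equivalent looks like this (key names per nfs.conf(5)):

```bash
# /etc/nfs.conf
[nfsd]
threads=128
```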
Rule of thumb: scale the thread count with CPU cores and the number of concurrent clients; idle threads cost little, while too few will queue every client's requests.
Also check how writes are handled in your /etc/exports file:
```bash
/export/media *(rw,sync,no_root_squash,no_subtree_check)
```
Change sync to async only if your workload tolerates a small risk of losing the most recent writes in a server crash (most media workflows do). It can double write performance in some tests.
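A sketch of the async variant of the export above, applied live:

```bash
# /etc/exports — async acknowledges writes before they reach disk
/export/media *(rw,async,no_root_squash,no_subtree_check)

# Re-read /etc/exports without restarting the NFS service
sudo exportfs -ra
```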
Advanced: CPU Affinity, IRQ Balancing, and NUMA Awareness
On high-core or multi-socket systems, the goal is to keep the NIC, its interrupts, and the I/O-heavy application on the same NUMA node; a sketch of the usual steps follows.
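The IRQ number, core ID, interface name, and application below are all illustrative; pull the real ones from /proc/interrupts and sysfs on your own hardware:

```bash
# Which NUMA node owns the NIC? (-1 means the platform reports none)
cat /sys/class/net/eth0/device/numa_node

# Stop irqbalance before pinning interrupts by hand
sudo systemctl stop irqbalance

# Pin one NIC queue's IRQ (number from /proc/interrupts) to core 2
echo 2 | sudo tee /proc/irq/123/smp_affinity_list

# Keep the application on the NIC's node (node 0 here)
numactl --cpunodebind=0 --membind=0 some_copy_tool
```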
Example: 40 GbE Optimized NFS Client Mount Line
```bash
sudo mount -t nfs -o nfsvers=3,rsize=1048576,wsize=1048576,hard,intr,nolock,noatime 192.168.100.10:/mnt/volume /mnt/nfs
```
And make sure the NIC has:
```bash
# Larger ring buffers absorb bursts at line rate
sudo ethtool -G eth0 rx 4096 tx 4096

# Enable receive and segmentation offloads
sudo ethtool -K eth0 gro on gso on tso on
```
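Worth verifying afterwards, since some drivers silently ignore unsupported settings:

```bash
# Confirm the offloads actually took
ethtool -k eth0 | grep -E 'tcp-segmentation|generic-segmentation|generic-receive'
```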
Summary for High-Speed NFS on Linux:
If you're running a Linux-based post-production environment and pushing high-speed networking, take the time to tune both the client and the server. It's not always about the storage hardware; sometimes it's about giving your software the elbow room it needs to run at full throttle.
Final Thoughts
The point here isn't that one OS or protocol is better than another; it's that no storage system performs well without a properly tuned client. And yet client tuning is rarely discussed in storage vendor documentation or deployment plans.
If you're building a high-speed network, make sure your workstations are part of the performance equation. Often, a few minutes of configuration on the client side can unlock the throughput you're already paying for and eliminate hours of troubleshooting.
Have you tuned your system for high-speed NFS or SMB workflows? I’d love to hear what worked (or didn’t) in your environment.
#PostProduction #VFXTech #MediaWorkflows #StoragePerformance #NetworkTuning #HighSpeedNetworking #25GbE #40GbE #100GbE #NFS #SMB #LinuxPerformance #MacNetworking #WindowsSMB #WorkstationOptimization