Stop Blaming Your Storage — Start Tuning Your Workstations

1. Windows + SMB: Tune the Client, Not Just the Server

Windows defaults are notoriously conservative when it comes to networking.

If you’re working with SMB over 25 or 40 GbE, here are a few tuning tips:

  • Enable SMB Multichannel: If your NIC supports RSS or multiple queues, SMB Multichannel can dramatically improve performance by allowing multiple connections.
  • Disable Large Send Offload (LSO) and Receive Segment Coalescing (RSC): On some NICs, these features actually hurt performance for high-speed file transfer.
  • Tune TCP Window Size: Use netsh int tcp set global autotuninglevel=experimental for advanced throughput testing.
  • Consider MTU size: Jumbo Frames (MTU 9000) can help but only if all devices in the path are correctly configured.
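
A rough sketch of how these might be applied from an elevated PowerShell prompt, assuming the adapter is named "Ethernet" (a placeholder); cmdlet support and the jumbo-frame value vary by NIC driver:

powershell
# Confirm SMB Multichannel is enabled (it usually is by default on modern Windows)
Get-SmbClientConfiguration | Select-Object EnableMultiChannel
Set-SmbClientConfiguration -EnableMultiChannel $true

# Disable LSO and RSC on the adapter if they hurt large sequential transfers
Disable-NetAdapterLso -Name "Ethernet"
Disable-NetAdapterRsc -Name "Ethernet"

# TCP receive window autotuning for throughput testing (from the tip above)
netsh int tcp set global autotuninglevel=experimental

# Jumbo frames; the registry value is driver-specific (often 9014)
Set-NetAdapterAdvancedProperty -Name "Ethernet" -RegistryKeyword "*JumboPacket" -RegistryValue 9014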


2. macOS + NFS or SMB: Apple-Friendly, But Not Always Fast

Macs are often left at their defaults — which aren’t tuned for high-performance media workflows:

  • Force NFS version 3 (if your server supports it). NFSv4 adds security but introduces latency.
  • Use the nolocks, async, and noatime mount options when working with large image sequences or uncompressed video; you'll often see a noticeable speed-up (example mount line below).
  • For SMB, disable signing for trusted networks by editing /etc/nsmb.conf and setting:
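
bash
[default]
signing_required=no

(signing_required is the key documented for recent macOS versions; existing mounts need to be reconnected before the change takes effect.)

For NFS, a minimal example mount line combining the options above, using the same example server as the Linux section below (the /Volumes/media mount point is a placeholder, and exact option names vary slightly by macOS version; check man mount_nfs):

bash
sudo mount -t nfs -o vers=3,nolocks,async,noatime 192.168.100.10:/mnt/volume /Volumes/media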


3. Linux + NFS: The Secret Weapon for High-Speed Workflows

When it comes to raw, tunable performance in high-speed storage environments, Linux and NFS are hard to beat, if you know where to look. I’ve worked on configurations where a Linux box on a 25 or 40 GbE network moved data faster than a Mac or Windows workstation on the same switch and storage… just because of better tuning.

Client-Side Mount Options That Matter

Here are the baseline NFS mount options I typically use:

bash
nfsvers=3,rsize=1048576,wsize=1048576,noatime,nolock,hard,intr        

  • nfsvers=3: Offers better performance than NFSv4 in many real-world media workflows.
  • rsize/wsize: Try 1048576 (1MB) for high-throughput environments. You may need to test lower values on older NICs or drivers.
  • noatime: Disables file access time updates — huge performance win for read-heavy workloads like playback.
  • nolock: Avoids unnecessary overhead in stateless file access cases.
  • hard,intr: hard keeps I/O retrying through server hiccups instead of erroring out; intr historically let users interrupt stalled mounts and is accepted but ignored on modern kernels.

Network Stack & Socket Buffer Tuning (Per NIC)

For 25 GbE and above, your network stack matters as much as your NFS config.

Run these with sysctl -w or put them into /etc/sysctl.conf:

bash
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
net.core.netdev_max_backlog = 250000
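
To apply without a reboot (same values as above):

bash
# apply a single value immediately
sudo sysctl -w net.core.rmem_max=67108864
# or, after adding the lines to /etc/sysctl.conf, load them all at once
sudo sysctl -p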

On 40 GbE or 100 GbE, you may want to push these values even higher (up to 134217728 or 128MB), but test carefully. NIC offloads (like GSO/TSO) and interrupt coalescing settings may also need adjustment with tools like ethtool.
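
There is no one-size-fits-all coalescing recipe, but as a starting point (eth0 is a placeholder interface name; supported options depend on the driver):

bash
# show current interrupt coalescing settings
ethtool -c eth0
# let the driver adapt coalescing to the traffic, or set fixed rx-usecs values and test
ethtool -C eth0 adaptive-rx on adaptive-tx on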

NFS Server-Side: nfsd Threads & Async Writes

If you control the NFS server, the number of threads that handle NFS requests (nfsd) is a huge tuning factor.

Check the current thread count (the first number on the th line of the output):

bash
cat /proc/net/rpc/nfsd        

Increase to, say, 64 or 128:

bash
rpc.nfsd 128        

Add this to your system’s startup scripts or NFS service configuration (/etc/nfs.conf on newer distros).
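
On distros that use /etc/nfs.conf, the persistent equivalent looks roughly like this; restart the NFS server service afterwards (nfs-server on most systemd distros):

bash
[nfsd]
threads=128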

Rule of thumb:

  • 25 GbE: 32-64 threads
  • 40 GbE: 64-96 threads
  • 100 GbE: 96-128+ threads

Also check whether sync or async is set in your /etc/exports file:

bash
/export/media *(rw,sync,no_root_squash,no_subtree_check)        

Change sync to async only if your workload tolerates slight risk of write loss (most media workflows do). It can double write performance in some tests.
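
With async enabled, the same export would read as below; run sudo exportfs -ra afterwards (or restart the NFS service) so the change takes effect:

bash
/export/media *(rw,async,no_root_squash,no_subtree_check)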

Advanced: CPU Affinity, IRQ Balancing, and NUMA Awareness

On high-core systems or multi-CPU setups:

  • Pin NIC interrupts to specific CPUs with irqbalance, or manually by writing to /proc/irq/<n>/smp_affinity_list (taskset pins processes, not interrupts); example commands follow this list.
  • Balance workloads across NUMA zones.
  • Use nfsstat -c and iostat -xm 1 to monitor bottlenecks live.
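
A few example commands for the above (eth0 and IRQ 123 are placeholders; find the real IRQ numbers in /proc/interrupts, and note that irqbalance may rewrite manual pinning unless it is stopped or told to skip those IRQs):

bash
# which NUMA node the NIC sits on (-1 means no affinity is reported)
cat /sys/class/net/eth0/device/numa_node
# pin a specific NIC interrupt to CPUs 2-3
echo 2-3 | sudo tee /proc/irq/123/smp_affinity_list
# watch NFS client RPC counters and per-device I/O while a transfer runs
nfsstat -c
iostat -xm 1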


Example: 40 GbE Optimized NFS Client Mount Line

bash
sudo mount -t nfs -o nfsvers=3,rsize=1048576,wsize=1048576,hard,intr,nolock,noatime 192.168.100.10:/mnt/volume /mnt/nfs        
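
To make the mount persistent, an equivalent /etc/fstab entry (same server, path, and options as above) would be along these lines:

bash
192.168.100.10:/mnt/volume  /mnt/nfs  nfs  nfsvers=3,rsize=1048576,wsize=1048576,hard,intr,nolock,noatime  0 0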

And make sure the NIC ring buffers and offloads are configured:

bash
ethtool -G eth0 rx 4096 tx 4096 
ethtool -K eth0 gro on gso on tso on        

Summary for High-Speed NFS on Linux:

  • Mount with nfsvers=3, 1MB rsize/wsize, noatime, nolock, and hard
  • Raise socket buffer limits and the netdev backlog via sysctl
  • Scale nfsd threads to the link speed (32 to 128+)
  • Switch exports to async where the workload tolerates the risk
  • Check ring buffers, offloads, interrupt coalescing, and NUMA placement with ethtool and friends
  • Monitor live with nfsstat -c and iostat -xm 1

If you're running a Linux-based post-production environment and pushing high-speed networking, take the time to tune both the client and the server. It's not always about the storage hardware; sometimes it's about giving your software the elbow room it needs to run full throttle.


Final Thoughts

The point here isn't that one OS or protocol is better than another; it's that no storage system performs well without a properly tuned client. And yet, client tuning is rarely discussed in storage vendor documentation or deployment plans.

If you're building a high-speed network, make sure your workstations are part of the performance equation. Often, a few minutes of configuration change on the client side can unlock the full throughput of your network and eliminate hours of troubleshooting.

Have you tuned your system for high-speed NFS or SMB workflows? I’d love to hear what worked (or didn’t) in your environment.


#PostProduction #VFXTech #MediaWorkflows #StoragePerformance #NetworkTuning #HighSpeedNetworking #25GbE #40GbE #100GbE #NFS #SMB #LinuxPerformance #MacNetworking #WindowsSMB #WorkstationOptimization
