The dNFS Empty Row Issue Explained

Oracle 19c Direct NFS (dNFS) shows v$dnfs_servers as empty until the database opens a file on a configured NFS mount. See how to verify dNFS.

dNFS Empty Row Issue

Summary

Oracle’s Direct NFS (dNFS) v$dnfs_servers view is empty until the database actively accesses a file on a configured NFS mount, which can cause confusion during dNFS configuration verification.


This blog on the dNFS empty row issue originally appeared on Andrew Sillifant’s blog. It has been republished with the author’s credit and consent.

Here’s where you end up: v$dnfs_servers returns no rows. You’ve checked everything. The library is linked. The alert log shows dNFS loaded. Your oranfstab is syntactically perfect. The NFS mounts are there.

And you’re convinced it’s broken; I was too.

I spent three hours proving dNFS was configured correctly while simultaneously proving it wasn’t working. Turns out I was measuring the wrong thing the entire time.

The answer: v$dnfs_servers only populates when Oracle actually opens a file on an NFS mount. Not before.

Context

I was setting up RMAN backup benchmarks on Oracle 19c against Pure Storage® FlashBlade® to compare kernel NFS versus Direct NFS throughput. Standard Wednesday—establish baseline, configure dNFS, measure delta, quantify the value proposition.

The specific challenge: Make a single database backup as fast as possible.

A single NFS mount has throughput limits: network bandwidth, client connection limits, and protocol overhead. Pure Storage FlashBlade can deliver far more aggregate bandwidth than any single NFS client connection can saturate.

Solution: multiple independent NFS paths.

Setup:

  • 4 separate FlashBlade data VIPs (10.21.221.25, 26, 27, 28)
  • 4 NFS exports (/rman-bench-06a through /rman-bench-06d)
  • 4 mount points (/u01/compass/s100a, s100b, s100c, s100d)
  • RMAN configured with multiple channels, each channel writing to a different mount point

Each mount point provides an independent I/O path with its own IP address, its own network connection, and its own bandwidth allocation. RMAN parallelizes across these channels, achieving bandwidth aggregation at the application layer.
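The channel layout described above can be sketched in an RMAN run block like this (one channel per mount point; channel names are illustrative, the format paths match the four mounts from the setup list):

```
RUN {
  # One disk channel per mount point; each writes to a different FlashBlade VIP
  ALLOCATE CHANNEL ch1 DEVICE TYPE DISK FORMAT '/u01/compass/s100a/%U';
  ALLOCATE CHANNEL ch2 DEVICE TYPE DISK FORMAT '/u01/compass/s100b/%U';
  ALLOCATE CHANNEL ch3 DEVICE TYPE DISK FORMAT '/u01/compass/s100c/%U';
  ALLOCATE CHANNEL ch4 DEVICE TYPE DISK FORMAT '/u01/compass/s100d/%U';
  BACKUP DATABASE;
}
```

Each channel streams its backup pieces to a different export, so the aggregate write bandwidth is the sum of what the four paths can sustain.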

This is standard practice for high-performance RMAN backups to NFS when you need to maximize throughput for a single database. You’re not backing up multiple databases—you’re splitting one database’s backup across multiple parallel streams to different IP addresses.

The benchmark question: Does dNFS improve throughput when you already have this multi-mount parallelism configured? Does the dNFS optimization stack with the architectural parallelism, or does the multi-mount design already extract maximum performance from kernel NFS?

That’s what I was trying to measure when I got stuck on configuration verification.

Configuring dNFS should take 10 minutes. It took me three hours because I misunderstood what v$dnfs_servers actually shows.

The Diagnostic Path (and Where It Leads Nowhere)

Standard verification after configuring dNFS:
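The standard check is a query against the view (run as SYSDBA):

```sql
SQL> SELECT svrname, dirname FROM v$dnfs_servers;

no rows selected
```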

An empty result triggers the troubleshooting cascade: you start checking everything systematically, because the configuration looks correct but the verification appears to fail.

The symlink points to libnfsodm19.so (dNFS), not libodmd19.so (stub). Correct.
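The symlink check itself is a one-liner (path assumes a standard 19c home layout):

```shell
ls -l $ORACLE_HOME/lib/libodm19.so
# Should point to libnfsodm19.so (dNFS), not libodmd19.so (stub)
```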

After exhausting the troubleshooting options, I tested whether dNFS would actually function:
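The test was simply to force I/O onto one of the mounts; a sketch (tablespace and datafile names are illustrative):

```sql
SQL> CREATE TABLESPACE dnfs_test
       DATAFILE '/u01/compass/s100a/dnfs_test01.dbf' SIZE 100M;

SQL> SELECT svrname, dirname FROM v$dnfs_servers;
-- Rows appear the moment Oracle opens the datafile
```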

It was working the entire time.

What v$dnfs_servers Actually Shows

v$dnfs_servers only shows active connections, not configured servers. The view is empty until Oracle actually opens files on configured NFS mounts. Oracle’s documentation confirms this behavior, but doesn’t make it obvious—you have to read between the lines.

If Oracle hasn’t accessed any files on your configured NFS mounts, the view is empty. This is working as designed. The configuration is correct. The library is loaded. The system is ready. But until Oracle actually opens a file on one of those mounts, there’s nothing to display in the view.

The Schrödinger reference: The configuration exists in a superposition of states (working/broken) until you observe it by performing I/O. The observation collapses the wavefunction.

This cost me three hours because I was asking the wrong question.

Configuration Reference

If you’re actually setting this up (not just debugging a non-existent problem), here’s the complete process:

If libodm19.so points to libodmd19.so (the stub), dNFS is disabled. Fix it:
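The relink uses Oracle’s standard make target (shut the database down first):

```shell
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on   # dnfs_off reverses it
```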

Location: $ORACLE_HOME/dbs/oranfstab

Format requirements:

  • export: and mount: on the same line, space-separated
  • Two-space indentation
  • nfsv4, not NFSv4.1 or nfs4.1

Example:
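A minimal oranfstab following the rules above, using one of the VIP/export pairs from the benchmark setup (the server name is illustrative; repeat the block for each VIP/export pair):

```
server: flashblade-data-a
  path: 10.21.221.25
  export: /rman-bench-06a mount: /u01/compass/s100a
  nfs_version: nfsv4
```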

The export path must match exactly what the NFS server exports. Verify with:
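For example:

```shell
showmount -e 10.21.221.25
# The export list must include /rman-bench-06a exactly as written in oranfstab
```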

dNFS requires OS-level NFS mounts. It doesn’t create them; it intercepts I/O to existing mounts.
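A matching kernel mount might look like this (mount options are illustrative; use whatever your platform and storage vendor recommend):

```shell
mount -t nfs -o vers=4.0,rw,hard 10.21.221.25:/rman-bench-06a /u01/compass/s100a
```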

A database restart is only required if you’re enabling dNFS for the first time (changing the library symlink). If you’re just modifying oranfstab, changes apply immediately when Oracle next accesses files on those mounts.

Alert log confirms library load:

Should show:
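A quick grep of the alert log (path assumes the default diag layout; the exact version string varies by release):

```shell
grep -i "direct nfs" $ORACLE_BASE/diag/rdbms/<db>/<sid>/trace/alert_<sid>.log
# Expect a line like:
#   Oracle instance running with ODM: Oracle Direct NFS ODM Library Version ...
```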

Actual functional test:
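A sketch of the test (names are illustrative; the datafile just has to land on one of the oranfstab-listed mounts):

```sql
CREATE TABLESPACE dnfs_verify
  DATAFILE '/u01/compass/s100b/dnfs_verify01.dbf' SIZE 50M;

SELECT svrname, dirname FROM v$dnfs_servers;
```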

If v$dnfs_servers shows rows after creating the tablespace, dNFS is working.

Performance Context

The reason for configuring dNFS was RMAN backup benchmarking against Pure Storage FlashBlade. dNFS typically delivers 30%-50% better throughput for large sequential I/O compared to kernel NFS because:

  1. Bypasses kernel NFS client (direct path from Oracle process to network stack)
  2. Better parallelism (can use multiple network channels simultaneously)
  3. Oracle-optimized I/O patterns

For RMAN workloads specifically—large, sequential writes to NFS—the performance delta is measurable and significant.

But only if you configure it correctly and don’t waste three hours thinking it’s broken when the verification methodology is wrong.

Monitoring Views

Once dNFS is actually in use:
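These are the standard v$ views for dNFS runtime state:

```sql
SELECT svrname, dirname FROM v$dnfs_servers;  -- active NFS server connections
SELECT filename FROM v$dnfs_files;            -- files currently open through dNFS
SELECT svrname, path FROM v$dnfs_channels;    -- network channels per server
SELECT * FROM v$dnfs_stats;                   -- per-process dNFS I/O statistics
```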

Summary

v$dnfs_servers showing “no rows selected” doesn’t mean dNFS is broken. It means Oracle hasn’t opened any files on your configured NFS mounts yet. The view shows active connections, not configuration state.

The alert log message confirming dNFS library load is the actual indicator. The rest is runtime behavior.

Ask the right question, get the right answer. The question wasn’t “why isn’t dNFS working?” It was “what does v$dnfs_servers actually show?”

FAQ

Why is v$dnfs_servers empty even though dNFS is configured?

The view only reports active Direct NFS connections. It stays empty until Oracle opens a file that lives on an NFS mount managed by dNFS. The system can be fully configured, the library can be loaded, and the mounts can be correct, but the view stays blank until actual I/O happens.

Does an empty view mean the configuration is broken?

No. It simply means the database has not yet accessed a file on an eligible NFS mount. Once Oracle performs I/O on a file stored on that mount, the view will populate and show the servers in use.

How can you tell whether dNFS is actually enabled?

The alert log contains the confirmation. Searching it for the Direct NFS entry will show whether Oracle loaded the correct ODM library at startup. This is the reliable indicator that dNFS is enabled, not the server view.

Why do all the configuration checks pass while the query still returns no rows?

The checks confirm configuration state, not runtime activity. The library symlink, the oranfstab syntax, and the OS mount points can all be correct while the verification query returns no rows if Oracle has not yet opened any files on those mounts.

What finally made v$dnfs_servers populate?

Creating a tablespace with a datafile located on one of the NFS mount points. The moment Oracle opened that file, dNFS became active and the view showed the expected entries.

What is the quickest way to confirm dNFS is working?

A functional test is the simplest method. Create a small file on an NFS mount through Oracle, then query the server view. If the configuration is correct, the view will populate immediately. The alert log entry also confirms the library load.

Why were four separate mount points used for a single backup?

They provided independent I/O paths in order to maximize throughput for a single RMAN backup. Each mount point had its own IP address and network allocation, allowing RMAN channels to parallelize and aggregate bandwidth.

Does dNFS still add value on top of multi-mount parallelism?

It often does. dNFS can optimize large sequential I/O and reduce kernel overhead, which is valuable for RMAN backups that push high throughput. The point of the benchmark was to measure how much additional benefit dNFS provided on top of the parallel mount design.