After some testing and thinking, I believe biti_rainy's conclusion is right: the numbers for buffer gets and disk reads are in blocks, not in number of read operations. biti_rainy's test, however, doesn't seem to prove that. First, an index scan is a 'db file sequential read', which always reads 1 block at a time; in that case there's no difference between the two units we disagree on. Second, proving that buffer gets means logical reads, which equals db block gets plus consistent gets, doesn't prove whether the unit is blocks or operations; it could be either, both in v$sqlarea (or v$sql) and in the autotrace Statistics output, at the same time. But looking at tkprof output together with system call tracing can almost certainly prove that the unit for disk reads is blocks.

My database is 9.0.1.3 running on Solaris 2.6, with db_file_multiblock_read_count set to 8. I created a table with 500,000 rows and no index, set event 10046 at level 12 in order to see wait events, turned autotrace on, and ran select count(*) against the table. The plan shows a full scan of the table. v$sql.buffer_gets shows 764 and disk_reads 760, which equal the consistent gets and physical reads in the autotrace Statistics, respectively.

grep ^WAIT on the session's trace file shows 8 'db file sequential read's (with p3=1, of course) and 95 'db file scattered read's (p3=8). truss -t read,pread,readv,write,pwrite,writev -c -p also shows 95 calls of readv, consistent with the scattered read wait events. 95 x 8 = 760, exactly the same as disk_reads. This proves that 760 is indeed the number of disk blocks that were read, and that they were read in 95 operations of readv(2).

Unfortunately, system call tracing doesn't reveal memory accesses (I wish Solaris had an ltrace utility as Linux does), so consistent gets don't tally with anything shown in the truss output. But it's reasonable to think that 780 consistent gets means 780 buffers read; if it were 780 individual memory accesses, that would sound like far too few for a scan of 500,000 rows.
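As a quick sanity check on the arithmetic, here is a small shell sketch. The counts (95 scattered reads, 8 blocks per read) are the figures from my run; the grep and truss invocations are shown only as comments, since they need a live session's trace file and pid (the trace file name ora_<sid>.trc below is a placeholder):

```shell
# Commands run against the live session, for reference only:
#   grep '^WAIT' ora_<sid>.trc | grep -c 'db file scattered read'   # -> 95
#   truss -t read,pread,readv,write,pwrite,writev -c -p <pid>       # -> 95 readv calls
SCATTERED_READS=95        # 'db file scattered read' waits, each with p3=8
BLOCKS_PER_READ=8         # db_file_multiblock_read_count
echo $((SCATTERED_READS * BLOCKS_PER_READ))   # prints 760, matching v$sql.disk_reads
```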
I don't have better evidence against that hypothesis, so I'll accept it. For what it's worth, the following is my tkprof output, which doesn't exactly match what I said above. BTW, these tests don't require flushing the shared pool. Since I'm using 9i, I can alter session set events = 'immediate trace name flush_cache' to conveniently flush the buffer cache without bouncing the database.

Yong Huang

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       11      0.14       0.33          2          2          0           0
Execute     11      0.03       0.10          0         12          3           3
Fetch       11      0.76       1.11        760        790          0          17
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       33      0.93       1.55        762        804          3          20

Misses in library cache during parse: 10

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      15        0.00          0.00
  SQL*Net message from client                    14       31.46         31.52
  db file sequential read                         3        0.00          0.00
  db file scattered read                         95        0.06          0.29

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       35      0.05       0.05          0          0          0           0
Execute     37      0.07       0.37          1          1          5           3
Fetch       68      0.00       0.03          8         94          0          45
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      140      0.12       0.46          9         95          5          48

Misses in library cache during parse: 24

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                         9        0.01          0.02

   12  user  SQL statements in session.
   34  internal SQL statements in session.
   46  SQL statements in session.
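For what it's worth, the non-recursive columns in the tkprof output are at least internally consistent: the per-call disk and query figures sum to the totals row, and the Fetch disk count of 760 is exactly v$sql.disk_reads. A trivial check of that addition:

```shell
# Sum the Parse + Execute + Fetch rows from the non-recursive tkprof totals above
echo $((2 + 0 + 760))      # disk column:  should print the totals-row value 762
echo $((2 + 12 + 790))     # query column: should print the totals-row value 804
```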