23 Nov, 2009 (1 commit)
    •
      ring-buffer benchmark: Run producer/consumer threads at nice +19 · 98e4833b
      Ingo Molnar authored
      The ring-buffer benchmark threads run at nice 0 by default, using
      up a lot of CPU time and slowing down the system:
      
         PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        1024 root      20   0     0    0    0 D 95.3  0.0   4:01.67 rb_producer
        1023 root      20   0     0    0    0 R 93.5  0.0   2:54.33 rb_consumer
       21569 mingo     40   0 14852 1048  772 R  3.6  0.1   0:00.05 top
           1 root      40   0  4080  928  668 S  0.0  0.0   0:23.98 init
      
      Renice them to +19 to make them less intrusive.
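      A minimal sketch of the change, assuming the benchmark's kthread
      entry points look roughly like those in
      kernel/trace/ring_buffer_benchmark.c; each thread simply lowers
      its own priority before entering its work loop:

         #include <linux/kthread.h>
         #include <linux/sched.h>

         static int ring_buffer_producer_thread(void *arg)
         {
                 /* Run at the lowest priority (nice +19) so the
                  * benchmark does not starve regular tasks. */
                 set_user_nice(current, 19);

                 while (!kthread_should_stop()) {
                         /* ... produce events into the ring buffer ... */
                 }
                 return 0;
         }

      The consumer thread would get the same one-line set_user_nice()
      call; the loop body here is elided, not part of the patch.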
      
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12 Nov, 2009 (2 commits)
    •
      tracing: do not disable interrupts for trace_clock_local · 8b2a5dac
      Steven Rostedt authored
      Disabling interrupts in trace_clock_local imposes quite a
      performance hit on trace recording. Using perf top we see:
      
      ------------------------------------------------------------------------------
         PerfTop:     244 irqs/sec  kernel:100.0% [1000Hz cpu-clock-msecs],  (all, 4 CPUs)
      ------------------------------------------------------------------------------
      
                   samples    pcnt   kernel function
                   _______   _____   _______________
      
                   2842.00 - 40.4% : trace_clock_local
                   1043.00 - 14.8% : rb_reserve_next_event
                    784.00 - 11.1% : ring_buffer_lock_reserve
                    600.00 -  8.5% : __rb_reserve_next
                    579.00 -  8.2% : rb_end_commit
                    440.00 -  6.3% : ring_buffer_unlock_commit
                    290.00 -  4.1% : ring_buffer_producer_thread 	[ring_buffer_benchmark]
                    155.00 -  2.2% : debug_smp_processor_id
                    117.00 -  1.7% : trace_recursive_unlock
                    103.00 -  1.5% : ring_buffer_event_data
                     28.00 -  0.4% : do_gettimeofday
                     22.00 -  0.3% : _spin_unlock_irq
                     14.00 -  0.2% : native_read_tsc
                     11.00 -  0.2% : getnstimeofday
      
      Here trace_clock_local accounts for 40% of the tracing time, and
      ring_buffer_benchmark puts the cost of recording a trace at 210ns.
      After converting the interrupt disabling to preemption disabling,
      perf top shows:
      
      ------------------------------------------------------------------------------
         PerfTop:    1084 irqs/sec  kernel:99.9% [1000Hz cpu-clock-msecs],  (all, 4 CPUs)
      ------------------------------------------------------------------------------
      
                   samples    pcnt   kernel function
                   _______   _____   _______________
      
                   1277.00 - 16.8% : native_read_tsc
                   1148.00 - 15.1% : rb_reserve_next_event
                    896.00 - 11.8% : ring_buffer_lock_reserve
                    688.00 -  9.1% : __rb_reserve_next
                    664.00 -  8.8% : rb_end_commit
                    563.00 -  7.4% : ring_buffer_unlock_commit
                    508.00 -  6.7% : _spin_unlock_irq
                    365.00 -  4.8% : debug_smp_processor_id
                    321.00 -  4.2% : trace_clock_local
                    303.00 -  4.0% : ring_buffer_producer_thread 	[ring_buffer_benchmark]
                    273.00 -  3.6% : native_sched_clock
                    122.00 -  1.6% : trace_recursive_unlock
                    113.00 -  1.5% : sched_clock
                    101.00 -  1.3% : ring_buffer_event_data
                     53.00 -  0.7% : tick_nohz_stop_sched_tick
      
      Here trace_clock_local drops from 40% to only 4% of the total
      time, and the per-trace recording time goes from 210ns down to
      179ns, a 31ns improvement.
      
      I talked with Peter Zijlstra about the impact of calling
      sched_clock without interrupts disabled, and he told me that if a
      timer interrupt comes in during the call, sched_clock may report
      a wrong time.
      
      Weighing a seldom-incorrect timestamp against a 15% performance
      boost, I'll take the performance boost.
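      A minimal sketch of the conversion in trace_clock_local(),
      assuming the old code wrapped sched_clock() in
      raw_local_irq_save()/restore(); the actual patch may use ftrace's
      own preemption helpers rather than the plain
      preempt_disable_notrace() shown here:

         #include <linux/preempt.h>
         #include <linux/sched.h>

         u64 notrace trace_clock_local(void)
         {
                 u64 clock;

                 /*
                  * sched_clock() is a fast, lockless clock. Without
                  * interrupts disabled, a timer interrupt can land in
                  * the middle of the read and the reported time may be
                  * slightly wrong. Disabling only preemption keeps the
                  * task on one CPU for the read while skipping the
                  * costly irq save/restore.
                  */
                 preempt_disable_notrace();
                 clock = sched_clock();
                 preempt_enable_notrace();

                 return clock;
         }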
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    •
      ring-buffer: Add multiple iterations between benchmark timestamps · a6f0eb6a
      Steven Rostedt authored
      The ring_buffer_benchmark calls gettimeofday after every write to
      the ring buffer in its measurements. This folds the overhead of
      the gettimeofday call itself into the measurements and does not
      give an accurate picture of the time it takes to record a trace.
      
      This was first noticed with perf top:
      
      ------------------------------------------------------------------------------
         PerfTop:     679 irqs/sec  kernel:99.9% [1000Hz cpu-clock-msecs],  (all, 4 CPUs)
      ------------------------------------------------------------------------------
      
                   samples    pcnt   kernel function
                   _______   _____   _______________
      
                   1673.00 - 27.8% : trace_clock_local
                    806.00 - 13.4% : do_gettimeofday
                    590.00 -  9.8% : rb_reserve_next_event
                    554.00 -  9.2% : native_read_tsc
                    431.00 -  7.2% : ring_buffer_lock_reserve
                    365.00 -  6.1% : __rb_reserve_next
                    355.00 -  5.9% : rb_end_commit
                    322.00 -  5.4% : getnstimeofday
                    268.00 -  4.5% : ring_buffer_unlock_commit
                    262.00 -  4.4% : ring_buffer_producer_thread 	[ring_buffer_benchmark]
                    113.00 -  1.9% : read_tsc
                     91.00 -  1.5% : debug_smp_processor_id
                     69.00 -  1.1% : trace_recursive_unlock
                     66.00 -  1.1% : ring_buffer_event_data
                     25.00 -  0.4% : _spin_unlock_irq
      
      With this overhead included, each write to the ring buffer
      measured at 310ns.
      
      This patch adds a new module parameter called "write_interval",
      which defaults to 50: the number of writes performed between
      timestamp readings (a sketch of the batching follows the results
      below). After this patch perf top shows:
      
      ------------------------------------------------------------------------------
         PerfTop:     244 irqs/sec  kernel:100.0% [1000Hz cpu-clock-msecs],  (all, 4 CPUs)
      ------------------------------------------------------------------------------
      
                   samples    pcnt   kernel function
                   _______   _____   _______________
      
                   2842.00 - 40.4% : trace_clock_local
                   1043.00 - 14.8% : rb_reserve_next_event
                    784.00 - 11.1% : ring_buffer_lock_reserve
                    600.00 -  8.5% : __rb_reserve_next
                    579.00 -  8.2% : rb_end_commit
                    440.00 -  6.3% : ring_buffer_unlock_commit
                    290.00 -  4.1% : ring_buffer_producer_thread 	[ring_buffer_benchmark]
                    155.00 -  2.2% : debug_smp_processor_id
                    117.00 -  1.7% : trace_recursive_unlock
                    103.00 -  1.5% : ring_buffer_event_data
                     28.00 -  0.4% : do_gettimeofday
                     22.00 -  0.3% : _spin_unlock_irq
                     14.00 -  0.2% : native_read_tsc
                     11.00 -  0.2% : getnstimeofday
      
      With the default interval of 50, do_gettimeofday drops from 13.4%
      usage to a mere 0.4%! The measured cost per write goes from 310ns
      to 210ns; that 100ns (roughly a third of the original figure) was
      overhead introduced by the gettimeofday call itself.
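      A minimal sketch of the batching, assuming a producer loop along
      the lines of kernel/trace/ring_buffer_benchmark.c; do_one_write()
      is a hypothetical stand-in for the benchmark's actual ring-buffer
      write:

         #include <linux/module.h>
         #include <linux/time.h>

         /* Number of writes performed between timestamp readings. */
         static unsigned int write_interval = 50;
         module_param(write_interval, uint, 0644);
         MODULE_PARM_DESC(write_interval,
                          "# of writes between timestamp readings");

         static void do_one_write(void); /* hypothetical helper */

         static void producer_loop(unsigned long total)
         {
                 struct timeval start_tv, end_tv;
                 unsigned long done = 0;
                 unsigned int i;

                 do_gettimeofday(&start_tv);
                 while (done < total) {
                         /* Amortize the gettimeofday() overhead over
                          * write_interval writes instead of paying it
                          * on every single write. */
                         for (i = 0; i < write_interval; i++)
                                 do_one_write();
                         done += write_interval;
                 }
                 do_gettimeofday(&end_tv);
                 /* per-write cost ~= (end_tv - start_tv) / done */
         }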
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>