Postcards from the Fridge

Thursday, June 29, 2006

Bet ya bite a chip

I wanted to get something out earlier, but I've had a bit of a nasty fever for the past few days. I promised in one of the blog comments to release the scripts that I am using for testing. I will do this very soon (probably in the next blog entry); I still have some cleanup that I need to do.

fwrite->strace
In one of my past entries I mentioned that I was having trouble strace'ing startx, and that is what prevented me from using Federico's original patches. Well, that was beginning to bite me. First, my fwrite method did not handle threading very well: a message could get split if logs were sent simultaneously by two different threads. (I think I could fix that, though.) Second, and more importantly, Federico's excellent visualization script wouldn't work.

I determined that figuring out how to get strace to work was really the only way to go.

First, it appears that the "SUID" problem was with strace and NOT with SELinux as I originally thought. According to the strace man page it is possible to create a suid strace binary, and that will allow suid binaries to work properly. I created an suid-root strace, and that allowed me to "strace X". However, when I tried to launch gnome with this binary it gave me errors (something about "use a helper application for SUID stuff").

I had to take a different approach, so I changed my .xinitrc to launch an straced version of gnome rather than stracing startx directly. The suid-root X would start normally, and then the stracing would begin. That worked!

Here's my currently working .xinitrc:

.xinitrc
#!/bin/bash
exec strace -e clone,execve,open,access -ttt -f -o /tmp/gnome.log /home/gnome/bin/jhbuild run gnome-session

Notice that I added the "-e" option to Federico's default "-ttt -f" command line. This will ONLY record the clone, execve, open, and access system calls. This speeds up the run-time, and reduces the amount of stuff that needs to be dumped into the log file.

All is not rosy
Unfortunately, now that I am using strace, the start-up times seem a bit more sluggish. To quantify that, I have to update my scripts to automatically figure out how long things are taking. (So far I've just been reading log times manually.)
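
Figuring out the elapsed times automatically shouldn't be too bad, since "-ttt" prefixes every log line with an epoch timestamp (and "-f" prefixes it with the pid). Here's a rough python sketch of the kind of script I mean; the two marker strings at the bottom are just placeholders, not the real events:

#!/usr/bin/env python
# Rough sketch: compute the elapsed time between two events in an
# "strace -f -ttt" log.  With those options each line looks like
# "<pid>  <epoch.seconds> <syscall>(...)".
import sys

def timestamp(line):
    return float(line.split()[1])      # field 0 is the pid, field 1 the time

def elapsed(logfile, start_match, end_match):
    start = end = None
    for line in open(logfile):
        if start is None and start_match in line:
            start = timestamp(line)
        elif end_match in line:
            end = timestamp(line)
    return end - start                 # assumes both markers are in the log

# placeholder markers -- substitute whatever events you care about
print(elapsed("/tmp/gnome.log",
              "execve(\"/home/gnome/gnome2/bin/nautilus\"",
              "expose_event"))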

Also, and more importantly, my session will no longer shut down by itself with strace turned on. I don't know if this is a bug in my distribution (with strace/ptrace) or an actual race within the gnome code. It makes it a real pain when I want to run 40 tests because I have to sit there and press Ctrl-Alt-Backspace. Even then things don't get properly cleaned up. I am going to work on a big hammer to fix this.
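
The big hammer will probably end up being something dumb along these lines (just a sketch; the process names and the timeout are guesses, not measurements):

#!/usr/bin/env python
# Sketch of a "big hammer": give the session some time to shut down on
# its own, then kill it so the next test run can start.  The 120 second
# timeout and the process names are guesses.
import os, time

time.sleep(120)
if os.system("pgrep -x gnome-session > /dev/null") == 0:   # still alive?
    os.system("pkill gnome-session")    # ask nicely first
    time.sleep(10)
    os.system("pkill -9 Xorg")          # then bring out the hammer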

1000 words

Federico suggested that I rearrange the order of nautilus vs. gnome-panel in the gnome-session file, and see if that changes the variability that I have been seeing. I don't have a definitive answer on that yet. However, I did notice that nautilus's execution does become interlaced with the gnome-vfs-daemon and the gnome-panel. I wonder if it is possible to start the gnome-vfs-daemon before both of them, so nautilus won't have to wait for it to start up and/or compete with the panel for its resources. It would be really nice to know all of the dependencies of the gnome daemons. That way we could start the most needed ones earlier, and the non-conflicting ones in parallel.

In any event, here's a picture of my current startup. Note: most of the functions with a name of "(null)" are probably the gnome-vfs-daemon.




NOTE: The long pause at the end is because the shutdown of gnome has hung...

Note to ESD: DON'T EVER RUN


I noticed this before when I was shutting things down, but I saw it again in my logs, and it was really annoying me. I have esd completely disabled, yet on shutdown the esd process always starts.

I traced it down to "gnome-session-save". However, I don't yet know which part is calling it.

gnome-session-save calls this:
execve("/bin/sh", ["/bin/sh", "-c", "/home/gnome/gnome2/bin/esd -term"...], [/* 35 vars */]) = 0
which in turn calls this:
execve("/home/gnome/gnome2/bin/esd", ["/home/gnome/gnome2/bin/esd", "-terminate", "-nobeeps", "-as", "2", "-spawnfd", "15"], [/* 35 vars */]) = 0

This is the pebble in my shoe.
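
If anyone wants to dig at it with me: since the log already has every clone() and execve() in it, it should be possible to walk the parent chain back from the esd execve and see who asked for it. Something like this sketch (it assumes the log format shown above, and I haven't battle-tested it):

#!/usr/bin/env python
# Sketch: find the pid that execve'd esd in an "strace -f" log, then walk
# its ancestry using the return values of clone().  Assumes lines look
# like "<pid>  <time> clone(...) = <childpid>".
import sys

parent = {}   # child pid -> parent pid
execs  = {}   # pid -> last program it execve'd

for line in open(sys.argv[1]):
    fields = line.split(None, 2)
    if len(fields) < 3:
        continue
    pid, rest = fields[0], fields[2]
    if rest.startswith("clone(") and "= " in rest:
        child = rest.rsplit("= ", 1)[1].strip()
        if child.isdigit():
            parent[child] = pid
    elif rest.startswith("execve("):
        execs[pid] = rest[:70].rstrip()

for pid, cmd in execs.items():
    if "/esd" in cmd:                    # the culprit
        while pid:
            print("%s  %s" % (pid, execs.get(pid, "(no execve seen)")))
            pid = parent.get(pid)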

Cookbook
As I mentioned above, I don't yet have a cookbook so that other people can follow in my footsteps and reproduce things. I will do that for the next blog entry. I need to do a little bit of clean-up, but then I can provide a patch.

Thursday, June 22, 2006

We're hunting wabbits

I've been trying to track down the variability that I mentioned in the last blog entry. It is beginning to feel like a threading/scheduling/race problem.

When gnome is starting up, everything is competing for the CPU. 'gnome-panel' and 'nautilus' (among other things) are both trying to initialize at the same time.

So, I've noticed two different things.
  1. The first call to g_signal_connect in 'nautilus_application_instance_init' is fast in the fast case, and slow in the slow case.

    In particular, this call: 'g_signal_connect_object (gnome_vfs_get_volume_monitor (), "volume_unmounted", G_CALLBACK (volume_unmounted_callback), application, 0);'
    takes a long time (1.3 sec) when things are going slowly, but is very quick (0.2 sec) when things are fast. This function appears to add one or more threads that do a bunch of work.

    I've instrumented various functions in the file containing 'gnome_vfs_get_volume_monitor', and I can see that they are running, but I am not sure why. (I have yet to trace the execution from the signal connect to these functions actually running)

  2. In the fast case, gnome-panel runs its code AFTER nautilus starts up. In the slow case, the gnome-panel execution is interspersed with the nautilus execution.
Like I said above, this feels like a locking/scheduling problem. I suspect it is something like:
In the slow case: nautilus needs to do some work, but somebody else (gnome-panel?) holds the lock, so nautilus sleeps a LONG time after it realizes this.

In the fast case: nautilus grabs the lock, and keeps going until it is ready to run. Another possibility: For the fast case, it also might be that the gnome-volume-monitor stuff has already been initialized before nautilus needs it. However, my instrumentation shows that in the fast case, nautilus is the FIRST thing to call the gnome-volume-monitor code, so that wouldn't explain things.

Sunday, June 18, 2006

Strange and Magical world

This IS a reproduction

I've been trying to reproduce the slow nautilus startup times that Federico had been seeing on the CVS version of Gnome that I have checked out. I was reading through code first, but that was really the wrong thing to do. I should have actually timed how long things were taking, and THEN if it was still a problem read through the code.

That's what I did.

On the good news/weird news side, my total nautilus startup times (from the beginning of nautilus main to the expose_event sent to the icon_container) have been much, much lower than the times that Federico reported in his blogs. My total startup time averaged ~2 secs, while his time between gtk_main and the icon_container expose event alone was 2 seconds! (His time only included the time between when control was handed over to gtk and when things were finally painted. Mine included the whole nautilus initialization AND the time for gtk to paint things.)

I don't know WHY things would be so much better. I really wonder if strace was skewing his results like it was doing for the X server. I was also doing this from a WARM cache (his might have been from a cold cache.) I'm not sure.

High Variability

I wrote a few wrapper scripts to automatically launch gnome, time nautilus's startup, and then gather the results. I was curious to see how consistent the startup time was.

Short answer: It wasn't. I had a minimum time of 0.398 sec and a maximum time of 4.411 sec.
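
The gathering end of the wrapper scripts is nothing fancy; roughly this (a sketch that assumes one elapsed time per line in a results file, which is not necessarily the format my real scripts use):

#!/usr/bin/env python
# Sketch of the "gather the results" step: read one startup time (in
# seconds) per line and print min/max/average plus the sorted list.
# The one-number-per-line input format is just an assumption here.
import sys

times = sorted(float(line) for line in open(sys.argv[1]) if line.strip())
print("runs: %d  min: %.3f  max: %.3f  avg: %.3f" %
      (len(times), times[0], times[-1], sum(times) / len(times)))
for t in times:
    print("%.3f" % t)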

I've sorted the 44 runs that I made by time, so here's a picture of it:




When I used to do SPEC CPU benchmarking, the variability from run-to-run would drive us crazy. Some applications would be very fast in one run, and slow in another. Ideally, you could have a run where all of the applications ran at their top speed. It was frustrating.

The compiler writers saw it differently. If there was a variation, that meant it was possible for the application to always run as fast as the fastest time if they could only figure out what was slowing it down in the other times.

The nice thing about the SPEC runs was that each of the programs was independent, so you really should be able to optimize things for each app to run at its ideal time. Gnome is a little different, since it is a complicated interlocking system. The fast runs of Nautilus MIGHT be because some other startup program has already initialized the components that nautilus needs. However, it does say that we have the possibility for a VERY fast nautilus startup.

Variability makes it REALLY hard to optimize things, and can often hide a race, so it really needs to be investigated.

Ideally we could get nautilus to start in 0.398 secs. Why doesn't it do it every time? (This is what I'll be looking at for the next blog entry. Stay tuned... )

Cold cache vs. Warm cache

So far, I've been looking at the warm cache startup time of Nautilus. Why?

Basically I see the warm cache time as the "best we can do". As we eliminate I/O in the cold cache time, we will eventually get closer and closer to the warm cache time.

So while it does make sense to also optimize the cold cache time (as this is what the first user logging on to a system will experience), starting with the optimization of the warm cache will allow me to understand and optimize the different parts of nautilus, and then see how adding disk I/O slows them down. Also, I/O is scary. ;-) (I.e., once I/O is added to the mix, things get much less repeatable. Although the "flush the cache" app should help with that.)
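
(For what it's worth, one way to get a reasonably cold cache before a run is something like the following. This is just a sketch, not the "flush the cache" app itself; it needs root, and a kernel new enough to have /proc/sys/vm/drop_caches.)

#!/usr/bin/env python
# Approximate a cold cache: sync dirty pages out, then ask the kernel to
# drop the page cache plus dentries and inodes.  Needs root, and a kernel
# that has /proc/sys/vm/drop_caches.
import os

os.system("sync")
f = open("/proc/sys/vm/drop_caches", "w")
f.write("3\n")     # 3 = pagecache + dentries + inodes
f.close()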

Thursday, June 15, 2006

She sells Seashells

No concrete progress right now, as I've basically been doing research.

Reading Code
I've been spending a lot of 'heads-down' time reading through the nautilus code and docs in the nautilus directory. I'm beginning to understand how things fit together, and I can see the parts of nautilus that Federico analyzed/optimized before.

I am currently looking at why there is such a long time between the call to gtk_main, and the first call to "nautilus-icon-container.c expose_event()". I've been stepping through the code with gdb just to get some idea of how it all fits together. This is horrible from a timing point of view, but at least I can see what parts of this enormous system are called when.

I know that this has been said before, but the asynchronous nature of nautilus makes it tricky to figure out what is happening and why.

Tracing
I also understand a little bit about how Federico's tracing code works. It is a clever (ab)use of the 'access' system call to basically put an arbitrary message in strace's message buffers. Strace provides the time stamps, and handles the case where multiple threads might be calling the log function at the same time.
Neat.

BTW. If we wrapped this tracing in a check for an env variable (GNOME_TRACING? GNOME_LOG?), we should be able to include this in the source and ship it. That way we can trace/time this stuff in the field. A call to getenv is SO lightweight that the benefit should outweigh the cost.
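
The real patches do this in C inside the libraries, but the mechanism is easy to show. Here it is sketched in python just to demonstrate the trick (the "MARK:" prefix and the exact env variable check are my own inventions, not necessarily what Federico's patches use):

#!/usr/bin/env python
# Demo of the access() trick.  The "path" handed to access() is really
# just a message: the call fails harmlessly, but strace records the
# string and timestamps it for us.  The real patches do this in C.
import os

TRACING = os.environ.get("GNOME_TRACING") is not None   # cheap getenv-style check

def log(msg):
    if TRACING:
        os.access("MARK: " + msg, os.F_OK)   # shows up in the strace log

log("nautilus: entering main")               # example marker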

....
I haven't been able to use Federico's patches directly because I can't get strace to work on 'startx' when running FC5. It probably doesn't like a user stracing a SUID (X) executable. I wrote another variation which doesn't use strace, and instead opens a file, and dumps a timestamp and the message when the log function is called.
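
That variation is not much more than this (a sketch; the log path is arbitrary):

#!/usr/bin/env python
# Sketch of the no-strace variation: append a timestamp plus the message
# to a plain file.  (The real thing is a tiny C function using fwrite();
# the /tmp path here is arbitrary.)
import time

def log(msg):
    f = open("/tmp/gnome-timing.log", "a")
    f.write("%.6f %s\n" % (time.time(), msg))
    f.close()

log("nautilus: entering main")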

BTW. Does anyone know how to make strace work on FC5?

When I try to 'strace startx' it doesn't work and I get some SELinux errors. I know this is vague, but maybe it rings a bell for someone.

X startup

Last time I talked about some Xorg developers getting a 2X improvement in startup time. Turns out it was closer to 5%. (Which is still nice.)

Why 2X? The original timings were taken while running strace on X. Since the patches removed MANY MANY system calls, they greatly decreased the overhead generated by strace, and as a result greatly sped up the startup.

Graduated
On a personal note, I got my Masters last Thursday. I flew from Pittsburgh, PA to Cambridge, MA and back again within 24 hours. I flew through Philadelphia and sat in the airport for HOURS as weather conditions slowed down the flights. (Why can't all airports be like Pittsburgh and give free wifi?)

I didn't get home until 3am, and worked the next morning. Ugh.

--Phil

Monday, June 05, 2006

Sparklines and other goodies

Visualization of Perf data

I'm spending a little bit of time setting up my tools to get started with the profiling. I want to be able to combine the information that Federico's patches provide with the CPU usage information provided by bootchart. Once I have this working, I will be able to run a test and see all of the perf data in a single picture.

I'm using Tufte's ideas of sparklines (http://en.wikipedia.org/wiki/Sparkline) to convey how the CPU usage changes over time. I'm adapting a python script that can be used to generate sparklines (http://bitworking.org/projects/sparklines/ ).

Right now, it will read the data files generated by the bootchartd daemon, so you can pick your visualization (native bootchart -or- sparkline) after the data has been generated.

I added the ability to automatically add tags on different events, so that I can integrate the profiling events generated by Federico's patches into the graph.
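
Tagging events onto a sparkline is conceptually pretty simple. Here's a stripped-down sketch (not the bitworking script itself, and it assumes the CPU samples have already been pulled out of the bootchartd logs into (seconds, cpu%) pairs; the real parsing is glossed over):

#!/usr/bin/env python
# Sketch: emit a tiny SVG sparkline of CPU usage over time, with a red
# tick for each tagged event.  Assumes the samples and events have
# already been extracted from the bootchartd data.

def sparkline(samples, events, width=400, height=30):
    t0, t1 = samples[0][0], samples[-1][0]
    def x(t): return width * (t - t0) / (t1 - t0)
    def y(cpu): return height - (height * cpu / 100.0)
    points = " ".join("%.1f,%.1f" % (x(t), y(c)) for t, c in samples)
    ticks = "".join('<line x1="%.1f" y1="0" x2="%.1f" y2="%d" stroke="red">'
                    '<title>%s</title></line>' % (x(t), x(t), height, label)
                    for t, label in events)
    return ('<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">'
            '<polyline points="%s" fill="none" stroke="black"/>%s</svg>'
            % (width, height, points, ticks))

print(sparkline([(0, 5), (1, 80), (2, 95), (3, 40), (4, 10)],
                [(1.2, "nautilus: entering main")]))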

Here's a current sample. (I know, it needs a legend. Right now the graph shows each process's CPU usage (combined user+system).)



(OH, BTW: This patch (http://bugzilla.gnome.org/show_bug.cgi?id=319682) to add sparklines should DEFINITELY be included in gnome-system-monitor)

Gathering Profiling Data
I've been thinking a bit about how Federico added the current profiling information, and I think it is the right way to go. I don't need the capabilities of DBUS right now, because I can do all of the analysis after the run has completed.

I think that Federico's profiling changes could be included in the mainline of gnome if we wrapped them in a test of an environment variable (say GNOME_PROFILING). A getenv is trivial compared to the other stuff that goes on in the libraries/apps, and it would allow us to profile stuff in the field as well.

Once I finish up the visualization stuff, I will add that to his profiling patches. I am probably also going to add a printout of the "pid" (rather than the program name), so that I can merge the data from bootchart and the patches.

X startup time

It looks like Thomas Leonard and Daniel Stone have come up with a way to dramatically speed up X server startup.

Combining a patch in bugzilla and a patch developed on the list by Thomas and Daniel, there is almost a 2X speedup in the X server startup time.

For more info follow the thread started here:
http://lists.freedesktop.org/archives/xorg/2006-June/015746.html

Thursday, June 01, 2006

Slow X startup & Tracking initialization

After discovering that X startup takes a while even with the simplest configuration, I did a little more digging, and found that the PCI probe code was accessing nonexistent "/proc/pci" files about 8000 times!

Two people responded to the timing request (Thanks guys!), and their startup times were about the same as mine. Oddly enough my wife's nvidia-based laptop was 2.2 seconds, but it still had the hammering of "/proc/pci".

I've sent mail off to the X developers, and hopefully they can fix it.

More details at:
http://lists.freedesktop.org/archives/xorg/2006-June/015746.html

Update:
It looks like davej has already found this: http://kernelslacker.livejournal.com/34960.html

--

Also, I've talked with Federico and J5 about how to track when different applications start up and complete initialization. It looks like sending timestamped messages over the DBUS is a possibility.

I might also be able to get what I want by recording the timestamps of all of the major events, and then figuring out after the fact what is happening where. (This is pretty much what Federico's timing patches did.) If I sleep for a long enough time before starting the session close, I won't shut down before everything is complete.