Introduction

A number of researchers have questioned whether RP1 could run fast enough to support experiments requiring multiple simulation runs. The time-trial experiment described below was intended to provide a preliminary evaluation of RP1 performance. In particular, I wanted to know whether the model could run at faster-than-real-time speeds.

Background

Simulator performance depends on two general issues:

a. Java performance (looping and number crunching)
b. overhead related to the client-server architecture used in the simulator

Early in its existence, some reviewers described Java as slow. Not all of that was F.U.D. In its earliest forms, Java was strictly an interpreted language and did not compare well to highly optimized, compiled languages. Since then, the introduction of JIT compilers has improved Java performance substantially, and processing speed is seldom an issue. Even so, the question remained: would performance be adequate for the number crunching required by a simulator? This issue is especially important because the sophistication of the RP1 model is expected to increase, bringing with it an increased processing load.

Additionally, the RP1 simulator depends on TCP/IP-based client-server communications. These carry overhead both in terms of inter-process communication and thread-management mechanisms.

The Experiment

To measure the performance of the RP1 simulator, I implemented a simple client which performed a random walk about the simulated environment. The simulated robot performed a series of 1000 short movements (average duration 0.7 seconds) in random directions. Each movement resulted in the exchange of three transactions between the client and the server:

1. the client requests a movement
2. the server notifies the client of movement inception
3. the server notifies the client of movement completion

The test was performed with the simulator's graphical user interface turned off and the simulation clock speed set to 20 times normal.
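For readers who want to reproduce the trial, the client's main loop looks roughly like the sketch below. The RP1 calls themselves are stubbed out as comments, and the method names shown there (requestMove, awaitInception, awaitCompletion) are hypothetical, as is the uniform duration range; the post states only the 0.7-second average.

```java
import java.util.Random;

/** Sketch of the random-walk time-trial client. RP1 API calls are
 *  stubbed; only the loop structure and bookkeeping are shown. */
public class RandomWalkTrial {
    static final int MOVES = 1000;

    // Durations averaging 0.7 s; the uniform [0.4, 1.0) range is an
    // assumption made for this sketch.
    static double randomDuration(Random r) {
        return 0.4 + r.nextDouble() * 0.6;
    }

    public static void main(String[] args) {
        Random rand = new Random();
        double simSeconds = 0;
        for (int i = 0; i < MOVES; i++) {
            double heading = rand.nextDouble() * 2 * Math.PI; // random direction
            double duration = randomDuration(rand);
            // Three transactions per movement (hypothetical names):
            //   client.requestMove(heading, duration);
            //   client.awaitInception();
            //   client.awaitCompletion();
            simSeconds += duration;
        }
        System.out.println("Simulated seconds of movement: " + simSeconds);
    }
}
```

Because every movement costs three round trips over the socket, the loop exercises the inter-process communication path as heavily as the number-crunching path, which is the point of the trial.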
Time trials were run on three architectures:

- Sun UltraSPARC I (a vintage 1995 machine), once with server and client on the same machine and once with server and client on separate hosts
- 166 MHz Pentium running Windows 95
- 450 MHz Pentium II running Windows NT
- 450 MHz Pentium II running Linux, using IBM's Java JDK 1.1.6 (same machine as the NT test)

All Windows tests were run with client and server on the same machine. The network test was conducted on a well-administered but heavily loaded LAN. Most experiments used versions of Java 1.1.8 with JIT compilers downloaded from Sun Microsystems; the Linux system used IBM's JDK 1.1.6.

Results

Each experiment out-ran real time by the indicated factor:

Sun (single host): 9.5
Sun (two hosts): 6.5
Windows 95: 1.6
Windows NT: 4.4
Linux: 33.9

Remaining Issues

The Windows boxes showed surprisingly poor performance compared to the slower Sun machines and to the same Intel hardware running Linux. Has anyone had any experience with this kind of thing? Based on some vague, and unreliable, calibration in the model, it appears that the poor results on the Windows architectures were due to inter-process communication or thread-management issues. The poor results on the Windows 95 box can be attributed to its running an older version of Windows (circa 1996), dating from a time before Microsoft had assigned much priority to its implementation of inter-process communications. The Windows NT box's lackluster performance is harder to explain, since that is just the problem NT is supposed to address.

Fearing that the JIT compiler might not have been used on the Windows boxes, I repeated the experiment using the -nojit option available with the Java virtual machine. The fact that the program ran substantially slower with -nojit confirmed that the JIT had, in fact, been used during the first experiment. The real surprise, of course, was the outstanding performance of the Linux box.
This value seemed almost "too good to be true." It was verified through several runs and a careful check of the program's output logs.

Questions

Does anyone have any thoughts (aside from the obvious Microsoft bashing) on why NT performance should be so much slower, and any ideas on how it might be improved?
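For anyone wanting to relate these factors back to wall-clock times: if the reported factor is simulated time divided by wall-clock time (an assumption; the post does not define it explicitly), then 1000 movements averaging 0.7 s give roughly 700 s of simulated activity per trial, and the arithmetic is:

```java
/** Back-of-the-envelope conversion between speedup factor and
 *  wall-clock time, under the assumptions stated above. */
public class Speedup {
    // speedup factor = simulated seconds / wall-clock seconds
    static double factor(double simSeconds, double wallSeconds) {
        return simSeconds / wallSeconds;
    }

    public static void main(String[] args) {
        double simSeconds = 1000 * 0.7; // ~700 s of simulated movement per trial
        // e.g., a run at factor 33.9 implies about 20.6 s of wall-clock time,
        // while factor 1.6 implies about 437 s
        System.out.println(simSeconds / 33.9);
        System.out.println(simSeconds / 1.6);
    }
}
```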