Introduction:

We wanted to benchmark the Enforcer to get an accurate idea of its performance impact. To use a realistic workload, we decided to protect Apache's data and measure how much the Enforcer slows Apache's ability to serve pages. We also wanted to measure the impact of keeping the data on a loopback filesystem.

Overall:

In keeping with the theme of a realistic benchmark, we acquired the static web pages of all of the Athletics departments here at Dartmouth, as well as the Apache log of all the hits against those pages on a weekday. The dataset was 19,623 files with a total size of 664 megabytes. The log file consisted of 20,741 URLs, of which 15.0% were requests for files that did not exist.

To perform the benchmark we adapted a Perl program written to benchmark Apache. The program reads the URLs into memory and then forks 15 children, each of which attacks the target with all the URLs in a unique random order; that is, each child attacks in a different order than the other children. To make each benchmark as repeatable as possible, we seeded the random number generator with the same seed for every benchmark, ensuring that the order of URLs used by each child was identical from benchmark to benchmark. Each child attacks the target as fast as possible and reports the duration of each hit to the parent. The parent compiles a variety of statistics about the attack and reports a summary to the user after all the children finish. The primary statistic reported is the average number of hits/second.

Machine setup:

The machine running the benchmark program was a dual-processor Intel Xeon at 2.00GHz with 512M of memory, running Linux kernel 2.4.20-ac1 and Debian/stable. The machine running the Enforcer was an IBM Netvista 8310 desktop with a Pentium 4 at 2.00GHz, 128M of memory, and one IDE hard drive, running Linux kernel 2.6.0-test7 (no preempt) and Debian/unstable.
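The original driver was a Perl program; as an illustration of its structure (fork children, seeded per-child shuffles, per-hit timings reported to the parent, hits/second summarized), the same shape can be sketched in Python. All names here are ours, and fetch is a stub standing in for a real HTTP GET against the target:

```python
import random
import time
from multiprocessing import Process, Queue

NUM_CHILDREN = 15
SEED = 42                 # fixed seed so every benchmark run is repeatable

def child_order(urls, child_id, seed=SEED):
    """Return this child's random URL ordering: unique per child (the
    child id perturbs the seed) but identical across benchmark runs."""
    rng = random.Random(seed + child_id)
    order = list(urls)
    rng.shuffle(order)
    return order

def fetch(url):
    """Stub for one hit against the target; returns the elapsed seconds.
    A real harness would issue an HTTP/HTTPS GET here."""
    start = time.perf_counter()
    return time.perf_counter() - start

def attack(urls, child_id, results):
    """Child process: hit every URL as fast as possible and report each
    hit's duration to the parent via the queue."""
    for url in child_order(urls, child_id):
        results.put(fetch(url))
    results.put(None)                     # sentinel: this child is done

def run_benchmark(urls):
    """Parent: fork the children, collect per-hit timings, and report
    the primary statistic, average hits/second."""
    results = Queue()
    children = [Process(target=attack, args=(urls, i, results))
                for i in range(NUM_CHILDREN)]
    start = time.perf_counter()
    for c in children:
        c.start()
    timings, finished = [], 0
    while finished < NUM_CHILDREN:
        t = results.get()
        if t is None:
            finished += 1
        else:
            timings.append(t)
    for c in children:
        c.join()
    elapsed = time.perf_counter() - start
    return len(timings) / elapsed
```

Seeding a separate generator per child keeps the orders distinct between children while making each child's order byte-for-byte repeatable between runs, which is the property the benchmarks rely on.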
Each machine had a 100-megabit full-duplex Ethernet interface, plugged directly into the same switch.

Benchmark setup:

When the Enforcer's database was built, 156 of the 19,623 SHA-1 hashes were intentionally modified to be incorrect. This let us verify that the Enforcer was actually working, because it logged a message every time one of these files was accessed. We ran a total of eight different benchmarks, the cross product of (http, https) and (plain, enforcer, loopback, enforcer/loopback). For the plain and enforcer tests, the web data resided on the main filesystem; for the loopback and enforcer/loopback tests, it resided inside the loopback filesystem. For the HTTP benchmarks (which executed quite quickly) each child repeated the URLs three times, for a total of 932,940 hits per run. For the HTTPS benchmarks (which executed very slowly) each child made only one pass through the URLs, for a total of 310,980 hits per run. We ran each benchmark three times and averaged the results. The average size of each Apache request was 22.768 KB.

Analysis:

Table [see TechReport] was generated by running vmstat on the Enforcer machine while the benchmark was taking place. The % CPU was calculated by adding the 'us' (user) and 'sy' (system) vmstat fields, while the % I/O was taken from the 'wa' (waiting on I/O) field. From this data we conclude that the HTTP benchmarks were predominantly I/O bound while the HTTPS benchmarks were predominantly CPU bound. The actual impact of the Enforcer is slight: only 1.5% for an I/O load and 1.3% for a CPU load. As the amount of CPU work goes up relative to the amount of data loaded from disk, the impact of the Enforcer should diminish. The impact of putting data on the loopback filesystem is more significant: 39.4% for an I/O load, but only 3.7% for a CPU load. Putting the files on the loopback filesystem and then protecting them with the Enforcer resulted in the greatest slowdown.
For the I/O load the slowdown was 48.5%, and for the CPU load it was 6.8%.

Error analysis:

The percent deviation between runs was quite small relative to the percent slowdown caused by the Enforcer in all but two cases. For the (https, enforcer) benchmark the deviation between runs was 2.35% while the Enforcer's slowdown was only 1.2%; for the (https, loopback) benchmark the deviation was 3.61% while the slowdown was only 3.7%. Because of this inter-run deviation, the actual slowdowns may be somewhat higher or lower than the calculated values.
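The two statistics compared in the error analysis are simple arithmetic over the per-run hits/second figures; a sketch (function names are ours, and the example values below are illustrative rather than the measured data):

```python
def pct_slowdown(baseline_hps, test_hps):
    """Percent drop in average hits/second relative to the baseline run."""
    return 100.0 * (baseline_hps - test_hps) / baseline_hps

def pct_deviation(runs):
    """Spread of the runs' hits/second, as a percent of their mean."""
    mean = sum(runs) / len(runs)
    return 100.0 * (max(runs) - min(runs)) / mean

# e.g. a baseline of 1000 hits/s dropping to 988 hits/s is a 1.2% slowdown
print(pct_slowdown(1000.0, 988.0))
# e.g. three runs of 100, 98, and 102 hits/s show a 4% inter-run deviation
print(pct_deviation([100.0, 98.0, 102.0]))
```

When the inter-run deviation exceeds the measured slowdown, as in the (https, enforcer) case (2.35% versus 1.2%), the slowdown is within run-to-run noise, which is why the calculated values carry that caveat.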