Monday, November 06, 2006

OSDI Work-in-Progress Session

Please comment on the Work-in-Progress (WIP) Session.

The OSDI WIPs are listed at
http://www.usenix.org/events/osdi06/wips.html

5 Comments:

Blogger Chris LaRosa said...

I'd like to make an open request for systemic bugs people have encountered that were tricky to pin down. I'd like to try to detect them using the framework I described in "Pattern Mining Kernel Trace Data to Detect Systemic Problems" this afternoon. The framework seems to do well finding systemic problems that have frequent patterns of bad behavior -- I'd love to get some examples of bugs people have found to see how our system does with them.

Thanks,
Chris LaRosa

clarosa@emory.edu

6:50 PM  
Blogger Troy Ronda said...

Our anti-phishing tool (iTrustPage) is available at: http://www.cs.toronto.edu/~ronda/itrustpage/

Thanks,
Troy Ronda

12:13 PM  
Blogger agmiklas said...

First round of WIPS summaries.

==========

Taking the Trust out of Global-Scale Web Services
Nikolaos Michalakis, nikos () cs ! nyu ! edu

If you were to contract out the hosting of a dynamic Web site to a number of content delivery networks, how could you be sure that every system serving your customers was running the exact software you supplied to the CDNs? This problem becomes even more severe for content dynamically generated by ordinary Internet users, where contract law might not be sufficient motivation to ensure that the distributors don't behave maliciously.

Nikolaos is researching ways to certify that dynamic content is served correctly in an environment where the delivery systems are not fully trusted by the content providers. The basic design has clients forward a fraction of the signed responses from one server to other replicas for verification. If the verifying replica computes a different result, it publishes the erroneous signed response so that other hosts can learn of the misbehaving server.
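
A minimal sketch of the verification flow as I understood it (my own toy code, not Nikolaos's protocol; an HMAC with a shared key stands in for the per-replica public-key signatures a real deployment would need):

    import hmac, hashlib, random

    KEY = b"replica-signing-key"   # stand-in: real replicas would sign with private keys

    def signed_response(replica_id, request, handler):
        body = handler(request)
        msg = f"{replica_id}|{request}|{body}".encode()
        sig = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
        return {"replica": replica_id, "request": request, "body": body, "sig": sig}

    def spot_check(resp, handler, fraction=0.1):
        """Client forwards a fraction of signed responses for re-execution."""
        if random.random() > fraction:
            return None                    # not sampled this time
        if handler(resp["request"]) != resp["body"]:
            return resp                    # signed proof of misbehavior: publish it
        return None

    faithful = lambda req: f"page-for-{req}"
    tampered = lambda req: f"page-for-{req}<injected-ad>"

    proof = spot_check(signed_response("cdn-3", "home", tampered), faithful, fraction=1.0)
    print("misbehaving server detected!" if proof else "ok")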

==========

Dynamic Software Updating for the Linux Kernel
Iulian Neamtiu, neamtiu () cs ! umd ! edu
Michael Hicks

Software updates are a necessary evil. In order to apply them, a service must typically be restarted. For many applications, the downtime incurred by a restart is undesirable. Worse, updates to system software such as the kernel can necessitate a full reboot of the machine, resulting in an even longer stretch of downtime.

Ginseng, a tool developed by the presenter, can apply updates to running user-mode services. It does this by determining strategic locations in a service's execution where the code can be safely updated. Patching kernel code, however, is far more difficult, owing to its low-level and highly concurrent nature. Iulian described some of his work to make Ginseng able to update a live Linux kernel.
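
To make the idea concrete, here is a toy illustration of the update-point approach for a user-mode service (my own Python sketch, not Ginseng, which operates on compiled C programs): new code is swapped in only at a quiescent point in the request loop, never mid-request.

    import queue

    pending = queue.Queue()              # updates delivered while the service runs

    def handle_v1(req): return f"v1:{req}"
    def handle_v2(req): return f"v2:{req}"

    handler = handle_v1
    for i, req in enumerate(["a", "b", "c"]):
        try:                             # safe update point: no request in flight
            handler = pending.get_nowait()
        except queue.Empty:
            pass
        print(handler(req))              # v1:a, then v2:b and v2:c after the update
        if i == 0:
            pending.put(handle_v2)       # an update arrives mid-run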

==========

Failures in the Real World
Bianca Schroeder, biancas () andrew ! cmu ! edu
Garth A. Gibson

A major challenge in running large-scale systems is that component failure is the norm rather than the exception. Unfortunately, most work on dealing with failures is based on simplistic assumptions rather than real failure data. Bianca has been collecting and analyzing failure data from several real-world installations. The initial results indicate that many commonly used failure models are not supported by real data. For example, the observed probability of a RAID failure can be an order of magnitude larger than predicted by the standard model, which assumes exponentially distributed intervals between failures.

Motivated by these initial results, Bianca is continuing to collect and analyze failure data from a large variety of real-world installations. By carefully grounding new failure models in real data, researchers will be able to more accurately model a system's response to component failure.
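
As a back-of-the-envelope illustration of why the distributional assumption matters (simulated numbers, not Bianca's data): with the same mean time between failures, a bursty Weibull distribution makes a quick follow-on failure, the event that defeats RAID, noticeably likelier than the exponential model predicts.

    import math, random

    def p_gap_below(draw_gap, window=24.0, n=200_000):
        # fraction of inter-failure gaps shorter than the rebuild window
        return sum(draw_gap() < window for _ in range(n)) / n

    mtbf = 1000.0                                # hypothetical mean, in hours
    exp_gap = lambda: random.expovariate(1.0 / mtbf)
    shape = 0.7                                  # shape < 1: decreasing hazard, bursty
    scale = mtbf / math.gamma(1 + 1 / shape)     # match the exponential's mean
    weib_gap = lambda: random.weibullvariate(scale, shape)

    print("P(next failure within 24h), exponential:", p_gap_below(exp_gap))
    print("P(next failure within 24h), Weibull:    ", p_gap_below(weib_gap))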

==========

Pattern Mining Kernel Trace Data to Detect Systemic Problems
Christopher LaRosa, clarosa () emory ! edu
Li Xiong
Ken Mandelberg

Profilers, debuggers, and system call tracers can all be used to diagnose performance issues within a process. However, diagnosing performance problems that result from the interplay of two or more processes can be complicated. For example, determining why an X server is exhibiting poor performance can involve gathering and correlating traces from both the X server and any active X clients. Unfortunately, there are few tools to automatically correlate traces, and programmers must usually resort either to poring over the gathered data by hand or to writing ad hoc scripts.

Christopher plans to apply data mining techniques to system-wide activity traces. Using these techniques, anomalous conditions that might impact system performance can be automatically detected and isolated, even if they span multiple processes. He provided an example involving a stock-ticker toolbar applet that unnecessarily flooded the X server with requests. His trace analyzer was able to automatically detect the excess of X calls and pinpoint their origin. He would appreciate it if DTrace or LTT users could share their hard-to-find bugs with him so that he can test the effectiveness of his system's automatic detection.
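
The flavor of the approach, in a bare-bones sketch (mine, not Christopher's miner): slide a window over an interleaved system-wide trace, count short event patterns, and flag any whose support is suspiciously high.

    from collections import Counter

    def frequent_patterns(trace, n=3, min_support=0.2):
        grams = Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))
        total = max(sum(grams.values()), 1)
        return {g: c / total for g, c in grams.items() if c / total >= min_support}

    # Hypothetical interleaved trace: an applet floods the X server with requests.
    trace = (["applet:XRequest", "xserver:reply"] * 40
             + ["editor:write", "xserver:expose"] * 5)
    for pattern, support in sorted(frequent_patterns(trace).items()):
        print(f"{support:.0%}  {' -> '.join(pattern)}")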

==========

Spectrum: Overlay Network Bandwidth Provisioning
Dejan Kostic

Overlay networks are currently used to efficiently disseminate content. However, due to their decentralized nature, it can be difficult to ensure that there is enough outbound capacity to support all receivers, and to prevent other overlays from 'stealing' bandwidth from higher-priority services. This presents a serious problem when overlays are used to transfer streaming media, where timely delivery of content is necessary for the system to operate correctly.

Dejan is working on algorithms to measure and disseminate bandwidth availability information throughout an overlay network. By doing so, the system can make globally optimal decisions about how much bandwidth to dedicate to a media stream. This is especially useful when the same overlay is carrying a variety of different content. For example, a BitTorrent-like transfer through the overlay might be permitted as long as it doesn't cause anyone's video stream to drop below a certain bit-rate.
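
A much-simplified allocation rule to illustrate the goal (my sketch, not Dejan's algorithm): streaming flows keep their minimum bit-rates, and a bulk transfer may only consume whatever outbound capacity is left over.

    def allocate(capacity_kbps, stream_floors, bulk_demand_kbps):
        guaranteed = sum(stream_floors.values())
        if guaranteed > capacity_kbps:
            raise RuntimeError("admission control should have rejected a stream")
        leftover = capacity_kbps - guaranteed
        return {**stream_floors, "bulk": min(bulk_demand_kbps, leftover)}

    # Hypothetical numbers: a 10 Mbps uplink carrying three 2 Mbps video streams.
    print(allocate(10_000,
                   {"video1": 2_000, "video2": 2_000, "video3": 2_000},
                   bulk_demand_kbps=8_000))   # bulk gets the remaining 4 Mbps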

==========

An Infrastructure for Characterizing the Sensitivity of Parallel Applications
to OS Noise
Kurt B. Ferreira, kurt () cs ! unm ! edu
Ron Brightwell
Patrick Bridges

Many commodity operating systems do not scale well to the number of processors found in today's supercomputers. When running such operating systems, as much as 50% of the system's performance can be consumed by the operating system itself. For this reason, many of today's largest supercomputers run heavily stripped-down operating systems that impose as little overhead as possible.

Kurt's research seeks to understand exactly how this overhead, termed "OS noise", affects the bottom-line performance of various scientific computing applications running on large supercomputers. He is also interested in finding ways to reduce the overheads of ordinary operating systems in order to make them more suitable for such machines.
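
A crude user-level analogue of a noise measurement (in the spirit of fixed-work benchmarks generally, not Kurt's actual infrastructure): time many identical units of work, and treat slow outliers as moments when the OS stole the CPU.

    import time

    def measure(iters=500, work=10_000):
        samples = []
        for _ in range(iters):
            t0 = time.perf_counter()
            x = 0
            for i in range(work):            # a fixed amount of computation
                x += i
            samples.append(time.perf_counter() - t0)
        samples.sort()
        median, worst = samples[len(samples) // 2], samples[-1]
        print(f"median {median * 1e6:.0f} us, worst {worst * 1e6:.0f} us "
              f"({worst / median:.1f}x slower)")

    measure()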

==========

Distributed Filename Look-up using DNS
Cristian Tapus, crt () cs ! caltech ! edu
David Noblet
Jason Hickey

One of the challenges in building a distributed file system is finding a way to locate the data and meta-data associated with files. Usually, this information is replicated to provide reliability and thus might be distributed across a wide-area network. A centralized directory service is undesirable because it provides a single point of failure and may become a bottleneck for the system.

Cristian noted that many of the problems faced when serving file meta-data and data are in fact the same as those solved by DNS. For example, both DNS and FS meta-data are used to resolve names in a hierarchical name space to addresses. For these reasons, Cristian suggested using DNS itself as a location service for the data and meta-data of files in a distributed FS. Looking up a file would involve making a DNS query for a name like "passwd.etc.mojavefs.caltech.edu". The query would return the addresses of the replica file servers that could serve the named file. Replication of the meta-data is handled automatically by the caching mechanisms built into DNS.
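
The mapping is mechanical enough to sketch directly (the zone name comes from the talk's example; the helper names are mine):

    import socket

    ZONE = "mojavefs.caltech.edu"

    def path_to_dns_name(path):
        # "/etc/passwd" -> "passwd.etc.mojavefs.caltech.edu"
        parts = [p for p in path.strip("/").split("/") if p]
        return ".".join(reversed(parts)) + "." + ZONE

    def locate_replicas(path):
        try:
            _, _, addrs = socket.gethostbyname_ex(path_to_dns_name(path))
            return addrs                 # replica file servers holding the file
        except socket.gaierror:
            return []                    # the example zone isn't actually deployed

    print(path_to_dns_name("/etc/passwd"))
    print(locate_replicas("/etc/passwd"))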

==========

Stealth Attacks on Kernel Data
Arati Baliga, aratib () cs ! rutgers ! edu

Rootkits use an array of impressive techniques to evade detection. Some go so far as to rewrite portions of the in-memory kernel image to perfect the illusion. New system calls might be added to render the rootkit's processes invisible to "ps". Others carefully manipulate the process lists and file system handlers to stay hidden.

Arati is investigating the ways in which rootkits can tamper with the running kernel image by manipulating kernel data alone. She hopes to use her findings to develop monitoring systems that can't be easily fooled. In particular, she is looking at attacks that do not employ conventional hiding techniques yet are able to cause stealthy damage to the system and evade detection by state-of-the-art integrity monitoring tools.
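
For context, here is a toy model of the kind of cross-view invariant that today's integrity monitors check, and that the attacks Arati studies deliberately avoid violating (illustrative data structures only, not real kernel state):

    # What "ps" walks (the all-tasks list) vs. what the scheduler walks.
    all_tasks = {1: "init", 42: "sshd"}
    run_queue = {1, 42, 666}

    hidden = run_queue - set(all_tasks)
    if hidden:
        print("possible rootkit: runnable but unlisted PIDs:", sorted(hidden))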

11:59 AM  
Blogger agmiklas said...

Second round of WIPS summaries.

The final drafts of a few of these haven't been checked by the speakers. I've noted where this occurs.

==========

(Final draft not checked)
AutoBash: Hammering the Futz out of System Management
Ya-Yunn Su, yayunn.su@gmail.com
Jason Flinn

We've all experienced it: a system that isn't quite performing the way we'd like. Maybe the video hardware doesn't set the appropriate resolution when an external monitor is plugged into a notebook. Perhaps the wireless card doesn't re-associate with the nearest access point correctly on wake-up. No matter what the problem, we typically use the same approach to solve it: type the symptoms into Google, find a page that describes a fix, apply the steps described in the fix, and check that the problem is corrected. If not, back out the changes and repeat. This process, which Ya-Yunn termed "futzing", requires a substantial amount of manual intervention and can lead to serious frustration.

AutoBash is a tool that automates the futzing process. When the user notices a configuration error, he describes the symptoms to the tool. AutoBash will search its database for scenarios where a user started with the current configuration, applied some changes, and ended up with a new configuration that satisfies the desired description. Once a record is found, the steps required to adjust the user's configuration will be automatically replayed. If the user is unsatisfied with the result, he will be given the opportunity to have the changes automatically rolled back, and possibly presented with another solution. Finally, should the tool be unable to automatically correct the error, it will watch as the user does so manually, so that other users may benefit from his futzing.
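
The futz loop itself is simple enough to sketch (configurations modeled as dictionaries; this is my reading of the idea, not AutoBash's implementation):

    def autobash(symptom, solution_db, state, works):
        for solution in solution_db.get(symptom, []):
            snapshot = dict(state)                   # checkpoint before the change
            state.update(solution)                   # replay the recorded fix
            if works(state):
                return state                         # user keeps the repaired config
            state.clear(); state.update(snapshot)    # roll back, try the next record
        return None                                  # fall back to watching a manual fix

    db = {"no-external-display": [{"xrandr_clone": True}, {"driver": "vesa"}]}
    state = {"driver": "nv", "xrandr_clone": False}
    print(autobash("no-external-display", db, state,
                   works=lambda s: s.get("xrandr_clone")))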

==========

iTrustPage: Preventing Users from Filling out Phishing Web Forms
Troy Ronda, ronda@cs.toronto.edu
Stefan Saroiu

The US economy loses billions of dollars each year to phishing attacks. Worse, phishing erodes the public's trust in the Web as a platform for e-commerce. Troy claimed that most forms that legitimately gather information originate from well-established Web sites, whereas phishing attacks are usually launched from newly created sites. Fortunately, a number of services can be used to estimate the popularity, and thus trustworthiness, of a site. Phishing pages must also appear similar to their targets. While it is difficult to design an algorithm to compare two Web pages, Troy noted that a person can usually determine with ease whether one Web site is mimicking another.

These two key observations form the basis of iTrustPage, a Firefox extension that helps users avoid phishing attacks. When a user tries to fill out a form on a page that is not well established, iTrustPage stops the user from proceeding and asks him to describe the task he is trying to accomplish. Using this description, iTrustPage queries Google and shows the user the Web sites associated with the first few hits. The user then indicates which site looks most similar to the page he was expecting. Finally, the tool redirects him to the organization's legitimate Web page and away from the phishing attempt. iTrustPage is currently available for download at: http://www.cs.toronto.edu/~ronda/itrustpage/
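
In outline, the decision flow looks something like this (a sketch of the workflow as described, not the extension's real code; the popularity cutoff is made up):

    def on_form_submit(url, popularity, ask_user, search):
        if popularity(url) > 1000:           # hypothetical "well-established" threshold
            return url                       # trusted site: let the form through
        task = ask_user("What are you trying to do on this page?")
        for candidate in search(task)[:5]:
            if ask_user(f"Is {candidate} the site you expected?") == "yes":
                return candidate             # redirect to the legitimate site
        return None                          # block the suspicious form

    print(on_form_submit(
        "http://paypa1-secure.example/login",
        popularity=lambda url: 3,                         # brand-new site
        ask_user=lambda q: "yes" if "paypal.com" in q else "pay a bill",
        search=lambda task: ["https://www.paypal.com"]))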

==========

(Final draft not checked)
Dynamically Instrumenting OS Systems with JIT Recompilations
Marek Olszewski, m.olszewski@utoronto.ca
Keir Mierle
Adam Czajkowski
Angela Demke Brown

Operating systems, like applications, grow more complicated each year. Unfortunately, the techniques and tools used to instrument, trace, and debug kernel-level code have not advanced as quickly as their user-mode counterparts. For example, tools such as Valgrind have made it possible to inject probes into running user-mode code using JIT recompilation techniques. Since these probes are dynamically compiled directly into the surrounding code, they perform much better than traditional patch-and-redirect techniques. Unfortunately, JIT instrumentation tools are currently unable to instrument kernel code.

Marek plans to create JIT instrumentation tools that can be used with kernel code. Given the time-sensitive nature of much of a kernel, this approach may make it possible to instrument code that previously could not be probed for performance reasons. By bringing the proven benefits of JIT instrumentation to the kernel, Marek will assist systems programmers in better understanding their operating systems, and ultimately help them produce more efficient and correct kernels.
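
Conceptually, the difference from patch-and-redirect is that the probe becomes part of the translated code itself. A toy picture of that idea (real tools rewrite machine code, not Python function lists):

    counts = {}

    def translate(block_name, ops):
        """'Recompile' a basic block with a probe inlined at its head."""
        def instrumented(state):
            counts[block_name] = counts.get(block_name, 0) + 1   # inline probe
            for op in ops:
                op(state)
        return instrumented

    block = translate("sys_read", [lambda s: s.append("do_read")])
    state = []
    for _ in range(3):
        block(state)
    print(counts, state)    # {'sys_read': 3} and three 'do_read' entries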

==========

(Final draft not checked)
Limits of Power and Latency Reductions by Intelligent Grouping
David Essary, essary@cs.pitt.edu

Disk accesses are very expensive operations; entire classes of applications are limited by the I/O capability of their hosts rather than by raw processing power. Improving an I/O subsystem's ability to respond quickly to requests can greatly improve the overall efficiency of such systems.

David's research seeks to improve storage access time and throughput by carefully controlling how the data is physically laid out on disk. Data can even be stored on multiple drives in an array to give the reading process more flexibility when deciding how to optimally read the data back. These techniques can result in a 70% reduction in disk-related latencies. David also discussed some of his work on predictive retrieval algorithms and compared his system's current performance to the theoretical maximum. Finally, he pointed out that his work to reduce seek operations improves not only performance but also drive power consumption.
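
A back-of-the-envelope model of why grouping helps (toy block layouts of my own; seek cost modeled as the distance between consecutive positions):

    def total_seek(layout, accesses):
        pos = {blk: i for i, blk in enumerate(layout)}
        return sum(abs(pos[a] - pos[b]) for a, b in zip(accesses, accesses[1:]))

    accesses = ["a", "x", "a", "x", "a", "x", "b", "y", "b", "y"]
    naive   = ["a", "b", "x", "y"]       # co-accessed blocks far apart
    grouped = ["a", "x", "b", "y"]       # co-accessed blocks adjacent
    print("naive layout seeks:  ", total_seek(naive, accesses))    # 17
    print("grouped layout seeks:", total_seek(grouped, accesses))  # 9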

==========

Bounded Inconsistency BFT Protocols: Trading Consistency for
Throughput
Atul Singh, atuls@cs.rice.edu
Petros Maniatis
Peter Druschel
Timothy Roscoe

Many protocols exist for ensuring the high availability of a system despite the potential for Byzantine failure of its components. However, these protocols scale poorly: the more nodes are added, the higher the performance penalty of keeping all of the components synchronized.

Atul proposed a solution in which replicas may return results that differ slightly from the correct result. The key is that the variability of the response is bounded; a client can be sure the true value is within some range of the returned quantity. By allowing the replicas to run slightly out of sync, the overall performance of the system can be improved. This approach can be useful for applications that don't require precise results. For example, it might be acceptable for a disk quota system to allow a user to consume at most 5% more disk space than their allotted quota.
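
One plausible client-side acceptance rule under such a scheme (my sketch of the idea, not the actual protocol): replies from replicas need not be identical, only within the declared bound of one another.

    def accept(replies, bound, quorum):
        """Return a usable value if `quorum` replies agree to within `bound`."""
        replies = sorted(replies)
        for i in range(len(replies) - quorum + 1):
            window = replies[i:i + quorum]
            if window[-1] - window[0] <= bound:
                return max(window)       # conservative choice for a quota check
        return None                      # too much disagreement: resynchronize

    # Hypothetical quota reads (in GB) from four replicas; 5 GB of slack allowed.
    print(accept([98.0, 99.5, 101.0, 180.0], bound=5.0, quorum=3))   # 101.0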

==========

EyesOn: A Secure File System that Supports Intelligent Version Creation and
Management
Yougang Song, ysong@cs.ucr.edu
Brett D. Fleisch, brett@cs.ucr.edu

Versioning file systems are often used to enhance the security capabilities of an operating system. Since they preserve the change history of each file, they can help system administrators both detect intrusions and roll back unauthorized changes. However, maintaining a comprehensive change history can become overwhelmingly expensive in both disk space and performance overhead.

The EyesOn system aims to preserve normal file operations and existing file structures while deferring the complexity of recovery operations until they are actually requested. EyesOn extends the strategy used by file system journaling, recording in-memory modified data into a log without significant additional processing. EyesOn uses these logs to create file versions that can be used to accelerate the retrieval of a file's change history. Versions are created based on user-supplied predicates that can make use of statistics stored in the log. For example, predicates can consider the elapsed time since a file was last modified, the total size of the change, or whether the user has explicitly requested that a snapshot of the file be taken. Two types of versions are created in EyesOn: normal versions, which support quick retrieval of recent changes and are automatically culled once they reach a certain age, and landmark versions, which preserve valuable information for a longer time.
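
The predicate mechanism might look roughly like this (hypothetical predicates and statistics; EyesOn's real interface may differ):

    def should_create_version(stats):
        predicates = [
            lambda s: s["secs_since_last_write"] > 3600,   # the file has gone quiet
            lambda s: s["bytes_changed"] > 1 << 20,        # over 1 MiB of changes
            lambda s: s["snapshot_requested"],             # explicit user request
        ]
        return any(p(stats) for p in predicates)

    print(should_create_version({"secs_since_last_write": 10,
                                 "bytes_changed": 2 << 20,
                                 "snapshot_requested": False}))   # True: large change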

==========

Robust Isolation of Browser-Based Applications
Charles Reis, creis@cs.washington.edu

First it was online e-mail. Next came online scheduling, photo archiving, and journaling. Today, companies have begun testing online word processors and spreadsheet applications. Eventually, it's possible that most applications will run on racks of systems in faraway data centers and be served over the Web.

If the future of applications is the Web, then in some sense the future of operating systems is the Web browser. While browsers may never directly interact with hardware, they will certainly fulfill other roles traditionally handled by an operating system. For example, Web browsers should ensure that a buggy or malicious script on one site doesn't adversely affect the scripts of another. Browsers should also protect the locally stored data associated with one site from unauthorized access by scripts from another site. Charles is currently looking at ways to build these types of containment mechanisms into browsers such that changes to the server-side applications are kept to a minimum.
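
The containment policy, reduced to its essence (a generic origin-isolation sketch, not Charles's specific mechanism):

    class Browser:
        def __init__(self):
            self.storage = {}                      # per-site data, keyed by origin

        def run_script(self, script_origin, target_origin, action, value=None):
            if script_origin != target_origin:     # isolation boundary
                raise PermissionError(f"{script_origin} may not touch "
                                      f"{target_origin}'s data")
            site = self.storage.setdefault(target_origin, {})
            if action == "write":
                site["data"] = value
            return site.get("data")

    b = Browser()
    b.run_script("mail.example.com", "mail.example.com", "write", "inbox state")
    try:
        b.run_script("evil.example.net", "mail.example.com", "read")
    except PermissionError as e:
        print("blocked:", e)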

11:39 PM  
Blogger agmiklas said...

Oops, forgot one.
Last, but not least...

==========

Information Flow for the Masses
Max Krohn, traz () pdos ! lcs ! mit ! edu
Micah Brodsky
Natan Cliffer
M. Frans Kaashoek
Eddie Kohler
Robert Morris
Alex Yip

Today's Web sites are serving an increasing amount of user-contributed content. They are no longer places where users go to passively download information, but instead act as meeting points where people can exchange content. For example, consider Wikipedia, which is built on content contributed by its users.

However, sites today do not allow users to contribute to the applications running on them. Wikipedia, for example, lets anonymous users edit its content but not patch the MediaWiki software running on its live servers. The main reason this can't be allowed in today's hosting environments is security: a malicious patch could be used to leak ordinarily inaccessible information.

Traz is a novel Web-hosting environment that applies Asbestos-like information flow reasoning to Web servers. User-contributed code may be allowed to manipulate data ordinarily inaccessible to the contributing user, but it will not be allowed to leak this information back to that user. Traz therefore allows Web sites to safely execute user-provided code against privileged information. Traz runs on ordinary commodity operating systems and allows developers to write their applications in any programming language.
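
A tiny taint-tracking sketch of the underlying rule (my illustration of Asbestos-style labels, not Traz's implementation): contributed code may compute over secret data, but everything derived from it carries a label, and labeled results cannot flow back to a user who lacks that label.

    class Labeled:
        def __init__(self, value, labels):
            self.value, self.labels = value, frozenset(labels)

    def run_contributed(code, inputs):
        result = code(*(x.value for x in inputs))
        labels = frozenset().union(*(x.labels for x in inputs))
        return Labeled(result, labels)             # output carries every input label

    def release(result, clearance):
        if not result.labels <= clearance:
            raise PermissionError("would leak labeled data")
        return result.value

    secret = Labeled("alice's private page", {"alice-private"})
    patched_render = lambda page: page.upper()     # an anonymous user's "patch"
    out = run_contributed(patched_render, [secret])
    try:
        release(out, clearance=frozenset())        # the anonymous contributor
    except PermissionError as e:
        print("blocked:", e)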

3:51 PM  
