Forensics: New Hardware Quiz: EnCE & CCE Practice

A new computer forensics quiz has been released; this one is based on hardware. The questions are designed to help people practice for their EnCE and CCE style theory exams.

More quizzes, aimed at people revising, will be coming soon.


Forensic: EnCase Verification, MD5, and Other Myths

EnCase is, without doubt, the most popular forensics tool on the market. However, due to the name of one of its features, it has also started one of the most common myths: verification.

When EnCase completes an image it then conducts a “verification”; when that completes, it brings up a variety of hash values and confirms that the data has “verified”. Excellent. Data verified… no, not at all.

The EnCase verification does not check the original data; it checks the destination data. This is an often misunderstood point, but one that can be critical.

A very simple test of this, for the doubting Thomases out there, is simply to disconnect the original drive while the verification is being carried out. The verification will still complete successfully, despite the fact that there is nothing to verify against. The reason is this:

The verification checks the image file: it verifies the integrity of the image file itself, an important process. It does not check whether the data imaged is correct – a very important difference.
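As a rough sketch of what that verification amounts to (the function and filenames here are illustrative, not EnCase internals): the hash recorded at acquisition time is compared against a re-read of the image file only – the original source drive never enters the calculation.

```python
import hashlib

def verify_image(image_path: str, acquisition_md5: str) -> bool:
    """Re-hash the *image file* and compare it to the hash recorded at
    acquisition time. Note that the original source drive is never read,
    so this says nothing about whether the acquisition itself was good."""
    md5 = hashlib.md5()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            md5.update(chunk)
    return md5.hexdigest() == acquisition_md5
```

If the image file was written from junk data, this check still passes, because the junk was hashed on the way in and the same junk is hashed on the way out.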

Example: Company X is at a client’s site imaging hard drives. They are using Tableau write blockers connected to laptops, and imaging to USB drives from a well-known brand (inside the USB case is a 3.5 inch 500 GB S-ATA drive). The drive to be imaged is an old 2.5 inch IDE drive.

The 2.5 inch drive, an old laptop drive, is taken out of the laptop and connected, via a 2.5 to 3.5 inch converter, to the Tableau write blocker, which is then connected, via USB, to the laptop.

The person imaging selects the source drive (the 2.5 inch) and sets the destination drive as the USB drive. This means that the data takes the following route:

1) It is read from the old, dusty, 2.5 inch hard drive.

2) It goes out the 2.5 inch pins, into the 3.5 inch converter.

3) From the 3.5 inch converter it goes along an IDE cable.

4) From the IDE cable it goes to the Tableau write blocker.

5) Inside the Tableau, a converter changes the IDE signal to USB.

6) The Tableau then transmits the data down a USB cable.

7) The USB cable connects to the laptop’s USB port.

8) The laptop’s USB port then connects to the motherboard.

9) The data is then transferred internally, and EnCase then “reads” the data.

10) EnCase then “writes the data” out, and it travels along the motherboard to another USB port.

11) From the USB port it goes down a USB cable to the USB drive.

12) The USB drive then converts the USB signal back to 3.5 inch S-ATA.

13) The 3.5 inch S-ATA drive then writes the data.

It is not until step (9) that EnCase reads the data. It is that data that EnCase then writes, and it then verifies what it has written. If it is feasible for an error to occur between steps 9 and 13 – hence the need for the verification – it is also feasible, if not more so, that an error occurs between steps 1 and 9.

If the hard drive is not working correctly, or the cables are damaged, or the pins are not aligned correctly, or any of a host of other reasons, then the hard drive will not image correctly. 99% of the time this will be a very obvious error, e.g. the hard drive will not spin up, or it cannot be seen – which is a good error to have, as it can be addressed.

Sometimes, very rarely but sometimes, the drive will image, but it will be producing junk data, or “skewed” data. While this is rare, it certainly does happen (unlike the theoretical problem of MD5 collisions); i.e. this is a real-world problem, not just one confined to labs and mathematics papers.
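One practical way to catch this kind of intermittent skew, sketched below with hypothetical paths, is to read the source twice and compare the hashes: a healthy drive and chain should produce identical reads, while flaky cabling or electronics between steps 1 and 9 often will not. (Raw access to a real device normally requires elevated privileges and should only be done behind a write blocker.)

```python
import hashlib

def read_hash(device_path: str) -> str:
    """MD5 of one complete read of a device or file."""
    md5 = hashlib.md5()
    with open(device_path, "rb") as dev:
        for chunk in iter(lambda: dev.read(1024 * 1024), b""):
            md5.update(chunk)
    return md5.hexdigest()

def source_reads_consistently(device_path: str) -> bool:
    """Two independent reads of a healthy source should hash identically;
    a mismatch points at the hardware chain, not at the imaging tool."""
    return read_hash(device_path) == read_hash(device_path)
```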

In the worst-case scenario this means that data will be imaged, EnCase will read it, write it, and then verify it. The person conducting the image will then leave the scene and state, without intending to lie, that they have a 100% accurate image of the data. When in actual fact they have junk. This can, and does, lead to all sorts of problems.

In one case the image of a single hard drive was taken at a “suspect’s” home; the image was verified and then taken back to the office. The image was later investigated, and from the investigation the examiner concluded that the user had wiped their drive with a tool that deliberately made a mess of the MFT.

What had actually happened is that the image of the drive was poor, and much of the MFT was skewed during the imaging process, probably due to bad electronics somewhere in the imaging chain; i.e. they had not taken a good image. But the person investigating the drive did not know or understand this, and as a result produced a very detailed report explaining how the drive had been deliberately wiped to hide information.

The suspect/victim of this allegation was fortunate in that the computer was working (and shown to be working) prior to the image being taken, and was working after the image was taken; this was, oddly, recorded by the person conducting the image. From this alone it was very obvious that the one and only drive in the computer could not have been wiped. But, in this case, a long and detailed report accusing the suspect/victim of wiping evidence was submitted. While there was no evidence of the original allegations, the report stipulated, at great length, that the suspect had wiped their drive, and that therefore conclusions could be drawn from that. The person writing the report was adamant that the image was correct, because it verified when he wrote the report. Even though he was hundreds of miles from the actual hard drive, the myth of EnCase verification was so strong that he believed the verification guaranteed the quality of the data. A common belief.

A second image was taken, correctly, and the drive examined. From this it could be seen that there was no evidence of wiping, nor evidence of the original allegations. The suspect/victim’s statements that their computer was working were fully corroborated, and they were proved innocent.

Forensics: What is imaging?

What does “imaging” a hard drive mean?

Imaging is the process of taking an exact copy of a hard drive, and is the very foundation of computer forensics, data recovery and electronic discovery processing. It takes every single 0 and 1 on one hard drive and puts it on another.

The imaging process, for most tools, takes an exact copy of each sector, starting at the first sector, Sector 0, and continuing until the last sector.

Once a sector is read by the imaging tool it is then written out again onto another medium. How the data is stored depends on the tool, its settings, and the user’s requirements.
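The read-then-write loop described above can be sketched as follows. The paths, the helper name, and the classic 512-byte sector size are assumptions for illustration; real tools also handle read errors, progress, and logging.

```python
import hashlib

SECTOR_SIZE = 512  # assumption: classic 512-byte sectors

def image_drive(source_path: str, dest_path: str) -> str:
    """Read the source sector by sector, starting at sector 0, write each
    sector to the destination, and hash the data as it is read.
    Returns the acquisition MD5."""
    md5 = hashlib.md5()
    with open(source_path, "rb") as src, open(dest_path, "wb") as dst:
        while True:
            sector = src.read(SECTOR_SIZE)
            if not sector:  # past the last sector
                break
            md5.update(sector)
            dst.write(sector)
    return md5.hexdigest()
```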

Generally the options are:

Copy one sector to another sector: Cloning. In this process each sector is mirrored onto another sector: sector 1 of the source is copied to sector 1 of the destination, sector 63 is copied to sector 63, etc. At the end of the process the media being written to will be an exact copy of the original drive. In theory you could put the cloned drive into the computer the original came from and it would boot successfully. For example, if the original drive is 100 GB (with one 100 GB partition) and the destination media is 250 GB, all of the 100 GB would be cloned to the 250 GB drive and the rest of the 250 GB would be blank. If the 250 GB drive was connected to a computer it would state that there was one 100 GB partition, and the remaining 150 GB would be “unused”. The drive could be navigated and used as if it were the original drive.

As long as the exact number of sectors imaged has been recorded, the exact end of the 100 GB clone on the 250 GB drive can be demonstrated. This is a perfectly legitimate method of imaging drives, and historically was the most popular.

Note: for this reason the destination drive must be zeroed/blank before the process starts.

Copying to a file: Raw/DD. In this process every sector is copied to the destination drive, but rather than cloning the data (e.g. sector 1 is copied to sector 1), the data is put into a file. This is a very important difference. Firstly, it means that the destination media HAS to be formatted, i.e. the destination drive cannot be completely blank. Secondly, it means that you cannot boot a physical machine from the image directly (there are options using virtual machines, mounting the drive, or creating a clone). It is also important to understand that, as the data in a file does not have to be sequential or contiguous (it can be fragmented), the data being written on the destination drive will not necessarily be sequential.

Example: a 40 GB drive is to be imaged to a 250 GB hard drive. The 250 GB drive is formatted with NTFS. The imaging tool is set to create a raw file, called image1.raw, on the destination (250 GB) drive. Sector 0 of the source drive is read and written to the first sector of image1.raw, sector 1 is then read and written to the second sector of the file… sector 63 is then written to the 64th sector of the file… etc. While the sector numbers appear very similar they are not, because the first sector of the file image1.raw could be physical sector 1,453,642, and therefore the second would be 1,453,643, and the third 1,453,644. As NTFS has the ability to fragment files, the 4th sector could be 2,743,203, or any other available sector. The actual physical sectors on the destination hard drive do not matter, because that is handled by NTFS. This continues until every sector of the 40 GB source drive has been copied. The end result is a 40 GB file that is an exact duplicate of the original hard drive, and which can be moved between media, across networks, backed up, and examined by tools like EnCase, FTK, etc.

The difference between a raw and a DD format is that the latter will chunk the data into set sizes, so that a single large file does not have to be created. For example, if a 1 TB drive is to be imaged, a raw image would create one 1 TB file, which could be problematic. If DD is used it will instead create multiple files of a set size (determined by the user); e.g. if the maximum segment size is set to 2 GB, then 500 2 GB segments would be created, resulting in image files like image1.dd.1, image1.dd.2 … image1.dd.500. When the DD image is opened by FTK, EnCase, or the like, it is reassembled and the drive is viewed as if it were a raw image or clone.
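The segmenting described above can be sketched as follows; the file names and the tiny chunk size in the test are illustrative (a real tool would use something like 2 GB segments).

```python
import os

def split_image(source_path: str, base_name: str, chunk_size: int) -> list:
    """Write the source out as fixed-size segments named base_name.1,
    base_name.2, ... mirroring the image1.dd.1 ... image1.dd.500 scheme.
    Returns the list of segment paths, in order."""
    names = []
    seg = 1
    with open(source_path, "rb") as src:
        while True:
            data = src.read(chunk_size)
            if not data:
                break
            name = "%s.%d" % (base_name, seg)
            with open(name, "wb") as out:
                out.write(data)
            names.append(name)
            seg += 1
    return names
```

Reassembly is just the reverse: concatenating the segments in numeric order reproduces the original raw stream exactly.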

Copying to an image/proprietary file: e.g. E01.

This is the next stage on from a raw or DD file. In this case, when a sector is written down it is not a case of 1 sector to 1 sector, for several reasons. Firstly, programs like EnCase allow for compression, which means that multiple sectors can be compressed into a much smaller amount of data. This is most effective when imaging hard drives with a lot of blank data, and means that a very large drive can be compressed significantly; an example of this is the E01 image created for the NTFS quiz on this site. This is a 40 GB drive that has been compressed down to a few hundred MB, using EnCase, because most of the drive is blank. In addition to compression, image files such as E01 include a variety of checksums and security features to detect if the files have been tampered with. More information on the E01 file is available here.

Imaging Tools

There are many imaging tools and systems on the market: from the boot disc BackTrack, which has a DD imaging tool installed and ready; to EnCase, the most famous/popular/expensive of forensic tools, which can only create E01 files; to FTK Imager, a lightweight free imaging tool that can produce E01, raw, or DD images.

Despite claims of perfect imaging, no imaging tool is really perfect, and each deals with errors in different ways; this article shows the effectiveness of different imaging tools.

Forensics: How to image a hard drive

The video below shows how to image a hard drive using EnCase.

Forensics: What does “File Extent” mean in EnCase?

What does “File Extent” mean in EnCase?

The term “File Extent” in EnCase refers to how many different fragments, or “data runs”, there are for a file. A file that consists of one contiguous block of data, i.e. one that is not fragmented, will have a File Extent of 1; any file with a value greater than 1 is fragmented. [This is for an NTFS file system.]

This information is obtained from the MFT, which gives exact details about the size, number, and location of all of the data runs associated with a file.
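Counting those data runs is straightforward once you have the raw run list from the file's MFT record. In the NTFS run-list encoding, each run starts with a header byte whose low nibble gives the size of the length field and whose high nibble gives the size of the offset field, and the list is terminated by a 0x00 byte. The sketch below (byte values in the comments are made-up examples) just walks the headers:

```python
def count_data_runs(runlist: bytes) -> int:
    """Count the data runs in an NTFS run list; a count of 1 means the
    file is contiguous, anything greater means it is fragmented
    (EnCase's "File Extent")."""
    i, runs = 0, 0
    while i < len(runlist) and runlist[i] != 0x00:
        header = runlist[i]
        length_size = header & 0x0F   # bytes used for the run length
        offset_size = header >> 4     # bytes used for the run offset
        i += 1 + length_size + offset_size  # skip to the next header
        runs += 1
    return runs
```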

Forensics: What is the MFT Mirror?

What is the MFT Mirror?

The MFT Mirror, seen as $MFTMirr in computer forensics tools, is a partial backup of the MFT. It is not, as is sometimes reported, a complete backup of the MFT.

The MFT Mirror contains a backup of the first four NTFS system file records:

  • $MFT
  • $MFTMirr
  • $LogFile
  • $Volume

The MFT Mirror is designed to allow for error handling, and can allow for recovery of deleted/wiped partitions.

If the MFT is partially wiped, i.e. the first few entries (which some viruses have done in the past), then the MFT Mirror can be used to rebuild the MFT. EnCase, which is a forensic tool rather than a data recovery tool, even has a function to allow for the rebuilding of a partition using the MFT Mirror (as do other data recovery tools).

The MFT Mirror can be viewed in EnCase, like the MFT, using the correct text styles.

It should be noted, and this is where there is often confusion, that the MFT entry for the MFT Mirror is, as for all files, in the MFT. But the MFT Mirror itself, the actual file, like all other normal files, is out in the hard drive space and not inside the MFT.
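The locations of both the MFT and the MFT Mirror are recorded in the NTFS boot sector (VBR), which is how tools find the mirror even when the MFT is damaged. A minimal sketch of reading those locations, using the documented boot-sector field offsets (the values in the test are synthetic):

```python
import struct

def mft_offsets(boot_sector: bytes) -> tuple:
    """Return the byte offsets of $MFT and $MFTMirr within the volume,
    parsed from an NTFS boot sector."""
    bytes_per_sector = struct.unpack_from("<H", boot_sector, 0x0B)[0]
    sectors_per_cluster = boot_sector[0x0D]
    cluster_size = bytes_per_sector * sectors_per_cluster
    mft_lcn = struct.unpack_from("<Q", boot_sector, 0x30)[0]      # $MFT cluster
    mftmirr_lcn = struct.unpack_from("<Q", boot_sector, 0x38)[0]  # $MFTMirr cluster
    return mft_lcn * cluster_size, mftmirr_lcn * cluster_size
```

A data recovery tool rebuilding a partition would read these two fields, then compare or copy the first few records between the two locations.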

Forensics: What does “Last Written” mean in EnCase?

EnCase, one of the most popular forensic tools, can display a variety of dates, including created, written, and accessed.

The two dates which most often cause confusion, for those starting out in computer forensics or a little rusty with EnCase, are “Entry Modified” and “Last Written”. Entry Modified is covered in a different article; the Last Written date is covered below.

A video showing the recovery of dates from within the MFT is available here.

What does the “Last Written” date mean in EnCase?

The Last Written date field in EnCase indicates the date the file was last modified. This should not be confused with the access date, which is when the file was last opened, or the Entry Modified date, which is when the MFT record for the file is modified.

The Last Written date is the same as the “Date Modified” shown in Windows Explorer. The two screenshots below show the same file, one seen through EnCase, the other through Windows Explorer.
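The same timestamp is what the operating system exposes as a file's modification time, so you can read it yourself; a minimal sketch using Python's standard library:

```python
import os
import datetime

def date_modified(path: str) -> datetime.datetime:
    """The file's modification timestamp: the value Windows Explorer
    shows as "Date Modified" and EnCase shows as "Last Written"."""
    return datetime.datetime.fromtimestamp(os.stat(path).st_mtime)
```

Note that, like Explorer, this reflects when the file *content* was last written, not when its directory entry or MFT record was last touched.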

Date Modified: Shown in Windows Explorer

Last Written Date: Shown in EnCase