Corporate Embezzlement
Introduction
Discussing computer forensics in the context of law enforcement or corporate security leads to a subject that covers the use of computers to catalog physical evidence analyzed by other forensic techniques, including biometric identification, DNA analysis, and dental evidence. Current technological trends have revolutionized methods of storing data, along with advanced access mechanisms, and these systems give law enforcement agencies instant access to such records. Computer forensics also supports the investigation of crimes committed with computers themselves, gathering evidence of criminal activity or violations of an organization's policy. The data can be extracted from storage devices including hard drives, flash drives, and memory cards (Computer forensics – a critical need in computer, n.d.). In this case study, a certified forensic analyst will lead the investigation as head of the digital forensics division. A large corporation in the city has contacted the police for assistance in investigating its concern that the company's Chief Financial Officer (CFO) has been using company money to fund personal travel, gifts, and other expenses. According to the company's security director, potential evidence collected thus far includes emails, bank statements, cancelled checks, a laptop, and a mobile device.
The Chief Financial Officer may have left logs of his online activities. This digital trail can reveal what a user did on the Internet, identifying who accessed which files, along with a log of every website visited. Temporary files can also reveal Flash templates and buffered videos. These traceable logs, files, cookies, and templates greatly aid the analysis of computer-based crimes and may provide solid evidence against a hacker or cyber-criminal. The CFO may have deleted files from the hard drive, but there are many methods by which deleted files can be recovered. The operating system usually does not erase file contents from the hard drive, even when the user empties the recycle bin; the data remains present until it is overwritten by new files. These traceability factors aid forensic investigations and can help track down criminals by examining their computers. For instance, during the execution of a search warrant at the residence of the serial killer John Robinson, law enforcement discovered two badly decomposed bodies and seized five computers (Computer forensics, n.d.). Examination of the computers revealed that Robinson had been using the Internet to find victims and arrange meetings, after which he sexually assaulted and killed them. These facts could only be established through computer forensics; physical evidence and conventional investigation alone could not have produced them. Many techniques are associated with forensic computing; they can be categorized into two groups: Graphical User Interface (GUI) based forensic tools and command-line forensic tools. Command-line tools are relatively small and can be stored on floppy disks, in contrast to heavier and slower GUI-based tools.
However, command-line tools also have disadvantages: for example, some are not capable of identifying .zip and .cab files. GUI-based tools provide a graphical user interface and are considered user friendly because, unlike command-line tools, which require a command for every operation, no specialized knowledge is needed. Their disadvantage is that they are large and cannot be saved on a floppy disk. Similarly, organizations require a proactive approach to threats that may penetrate the internal network and extract or expose sensitive information. There are many ways of acquiring forensic data on a network; we will consider only best practices.
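The deleted-file recovery described above is often performed by signature (or "magic number") carving: scanning a raw disk image for known file headers, since the underlying data blocks survive until overwritten. The sketch below illustrates the idea in Python; the byte signatures are real, but the `carve_offsets` helper and the synthetic in-memory image are illustrative assumptions, not a production carver.

```python
# Minimal signature-carving sketch: scan a raw disk image for known file
# headers. Real carvers (e.g. Foremost, Scalpel) also locate footers and
# validate internal structure before extracting a file.
SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",  # JPEG start-of-image marker
    b"PK\x03\x04": "zip",     # ZIP local file header (also .docx/.xlsx)
    b"MSCF": "cab",           # Microsoft Cabinet archive header
}

def carve_offsets(image: bytes):
    """Return sorted (offset, file_type) pairs for every signature hit."""
    hits = []
    for sig, ftype in SIGNATURES.items():
        start = 0
        while (pos := image.find(sig, start)) != -1:
            hits.append((pos, ftype))
            start = pos + 1
    return sorted(hits)
```

On a synthetic image such as `b"junk" + b"PK\x03\x04" + ...`, the function reports the offset and type of each embedded header, which is exactly the starting point a carver needs before extracting file contents.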
Network-Based Evidence Acquisition Practices
Network management depends on several vital management functions; if any one of them is not properly configured, effective network management is not possible. Data acquisition is a vital management process that needs to be addressed proficiently. A tool such as Wireshark will only use the data available to it and produce reports within the scope of that evidence. For instance, Wireshark may acquire data imprecisely because, in certain cases, transmissions are replicated, so the metrics will not show the correct picture. Acquisition tools are tailored to detect and process particular types of network traffic; if additional traffic is sent to the tool, it will overload the process and many packets may be discarded. Moreover, if a tool first saves network traffic for later processing, packet duplication can further degrade the capture process. Traffic in a network is received on many interfaces but mirrored out via a single interface, a many-to-one relationship. As a consequence, the buffer on a switch interface can be overrun; the resulting congestion causes the switch to discard packets, and the tool will then report packet loss and produce incorrect metrics. A best practice is to configure the port that replicates data on the module with the largest buffer size. By following this practice, the likelihood of packet loss on the switch port is minimized and packets are counted appropriately. Furthermore, this memo will address best practices for data acquisition from switches, integrating the methods required for effective filtering and customization.
Consequently, deploying these best-practice methods facilitates an accurate illustration of network traffic, correct metrics, minimized processing power, and maximum data storage.
Switch Port Analyzer (SPAN)
As per the network dictionary, “Switched Port Analyzer (SPAN) is a feature of many managed switches that extends the monitoring capabilities of existing network analyzers into a switched Ethernet environment. SPAN mirrors the traffic at one switched segment onto a predefined SPAN port. A network analyzer attached to the SPAN port can monitor traffic from any of the other switched ports” (Switched port analyzer, 2007). SPAN is a feature available on Cisco network devices that lets network administrators copy traffic from one physical port on a switch to another port. SPAN ports are configured through a session that includes a source and a destination: the monitor session source identifies the physical ports from which the SPAN will copy data and the direction of the traffic (RX, TX, or both), while the monitor session destination identifies the physical ports to which the SPAN will copy the data.
The source of the monitor session is composed of three attributes (Expert data acquisition best practice, n.d):
- Monitor session number: Differentiates the monitor session from any others on the switch.
- Monitor session source: Specifies the ports or VLANs from which the SPAN will copy data.
- Monitor session direction: Specifies the monitor session direction: RX, TX, or both (both by default).
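The three source attributes above map directly onto Cisco IOS configuration. As a sketch (the interface names are hypothetical, and exact syntax varies by platform and IOS release):

```
! Session 1: mirror traffic from Gi0/1 in both directions (the default)
! to Gi0/24, where the capture workstation is attached.
monitor session 1 source interface GigabitEthernet0/1 both
monitor session 1 destination interface GigabitEthernet0/24
```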
The monitor session source defines the ports from which data is replicated to the destination; these may be Layer 2 or Layer 3 ports, and both types are usable simultaneously. There is a constraint that prevents WAN interfaces, such as ATM interfaces, from serving as source ports. Best practices also advise against configuring EtherChannel ports as source ports. Furthermore, physical ports and VLANs cannot be mixed as sources within the same monitor session; a session is configured either for physical ports or for a VLAN.
When the source is configured using a VLAN, the process is considered a VLAN SPAN; VLAN sourcing includes every interface on the VLAN, so all of them can be monitored effectively. The destination data is composed of two attributes (Expert data acquisition best practice, n.d.):
- Monitor session number: Differentiates the monitor session from any others on the switch.
- Monitor session destination: Specifies the physical port(s) to which the data will be mirrored.
Destination port caveats: (Expert data acquisition best practice, n.d)
- A destination port can be any physical port. With release 12.1(13)E and later of Cisco IOS, you can configure the destination port as a trunk port, which allows VLAN tags to be forwarded to the data collection device for monitoring purposes. This technique can also be used to filter data leaving the destination port with the “switchport trunk allowed vlan” command.
- A destination port can only service a single SPAN session and cannot be an Ether Channel port.
- A monitor session can have up to 64 destination interfaces.
Port SPAN
Port SPAN allows individual interfaces to serve as sources, and it is recommended for environments where access-layer switches are installed. Monitoring sessions should focus on the interfaces connecting the systems relevant to the investigation and the business-critical applications: the sources of the emails, bank statements, and cancelled checks, as well as the laptop and mobile device. By following this best practice, data redirected to other servers is not visible to the analyzer and does not compete for bandwidth on the SPAN destination.
Digital Forensics
Network threats are evolving, along with the risks associated with them. It is essential for an organization to construct a security framework that addresses threats to its computer networks; highly skilled staff, records of previously treated threats, and incident management teams are essential parts of such a framework. When a network is already compromised, it is essential to isolate the infected nodes in order to stop a worm from spreading to the whole network. However, before containing or countering a breach, it is important to find the source and the nodes that are affected. In the current scenario, the network administrators face the challenge of finding traces of a worm that has breached the distributed network. A distributed network can be broad in scale and may involve many enterprise computer networks. The currently installed network security controls are bypassed by the worm because distributed traffic anomalies are complex and too small to detect individually; combined, however, many small data packets can have a significant impact when they share the same frequency and domain, as is happening in the current scenario. For this reason, a method for detecting threats originating from distributed networks was introduced by Zonglin, Guangmin, Xingmiao, and Dan (2009). The methodology includes detection of distributed network patterns along with network-wide correlation analysis of instantaneous parameters, anomalous space extraction, and instantaneous amplitude and instantaneous frequency. In the current scenario, network administrators can apply the instantaneous amplitude and instantaneous frequency of network transmission signals, which is part of this model, to uncover unknown network patterns and categorize them into the frequency and time domains separately.
Moreover, they can also deploy an anomalous space extraction methodology based on network transmission predictions. This methodology allows network administrators to go beyond the limits of PCA-based methods, which have failed to provide strong correlations. Furthermore, the third component, a network-wide correlation analysis of amplitude and frequency, can discover overall network transmissions originating from distributed networks, which the current controls detect only in small quantities.
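As a much-simplified stand-in for the idea of flagging anomalous traffic, not the instantaneous amplitude/frequency method of Zonglin et al., the sketch below flags measurement intervals whose traffic volume deviates from the mean by more than k standard deviations. The function name and the k = 3 threshold are illustrative assumptions.

```python
import statistics

def flag_anomalies(byte_counts, k=3.0):
    """Return indices of intervals more than k sigma from the mean volume.

    byte_counts: traffic volume per fixed time interval (e.g. bytes/second).
    A real network-wide detector would correlate many such series across
    links rather than thresholding a single one.
    """
    mean = statistics.fmean(byte_counts)
    sigma = statistics.pstdev(byte_counts)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing stands out
    return [i for i, v in enumerate(byte_counts) if abs(v - mean) > k * sigma]
```

On a series of quiet intervals with a single traffic spike, only the spike's index is returned; uniform traffic yields no flags.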
After determining the exact source of the unknown worm, the next challenge is to analyze the infected nodes within the network. Without a specialized tool, it is a daunting, almost impossible task to detect anomalies at low levels, i.e., on network ports. A powerful tool is required to pinpoint unknown threat activity within the network; Wireshark serves this purpose. Wireshark is a free tool that analyzes network packets and processes them to display the packets' detailed contents (Scalisi, 2010). Moreover, the tool contains numerous features that can facilitate the threat detection process. The first step a network administrator takes is to identify the type of traffic or the ports to be targeted. The second step is to start capturing packets on all ports of all the switches (Scalisi, 2010), modifying port numbers as required. In the current scenario, all network ports will be scanned, including the Simple Mail Transfer Protocol (SMTP) port. The tool can scan only specific targeted ports, but in a corporate network environment that may not be possible, as intrusion detection systems (IDS) and firewalls may conflict with the tool, and different subnets on the network will require complex, time-consuming configurations. Furthermore, the network administrator can always set a time limit for capturing data on a specific network port. The tool will then reveal increased activity on each port through real-time statistics and a report produced once the investigation completes.
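The per-port statistics described above amount to a tally of packets by destination port. The sketch below assumes the capture has already been parsed into (src, dst, dst_port) tuples, e.g. exported from Wireshark/tshark; the helper names and sample addresses are illustrative.

```python
from collections import Counter

def port_activity(packets):
    """Count packets per destination port, e.g. to spot a surge on SMTP (25)."""
    return Counter(dport for _src, _dst, dport in packets)

def top_ports(packets, n=3):
    """Return the n busiest destination ports as (port, count) pairs."""
    return port_activity(packets).most_common(n)
```

Given a capture dominated by port-25 traffic from one host, `top_ports` surfaces SMTP at the head of the list, which is the kind of skew an investigator looks for.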
Attacks are often sophisticated: the hacker does not want the source to be tracked, so trace-back is always difficult. After completing these two tasks, the third task for the network administrators is to trace the hacker or the source of the threat. They will analyze two fields in the packet header, the timestamps and the record route, although these fields are normally used by network engineers to diagnose routing problems. A further challenge is maintaining a globally synchronized clock throughout the trace-back process, as a packet may have travelled through different time zones. A methodology called packet marking can be used to overcome these challenges: it appends partial path information to the data, enabling a successful trace-back.
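The packet-marking idea can be sketched with a simplified node-sampling simulation (real schemes, such as probabilistic packet marking, encode partial edge information in IP header fields; the single overwritable mark field here is an assumption for illustration). Each router on the path overwrites the packet's mark with its own address with probability p; because later routers can overwrite earlier marks, routers nearer the victim survive as the final mark more often, so mark frequencies reveal the path ordering.

```python
import random
from collections import Counter

def mark_packet(path, p, rng):
    """One packet's trip: each router (ordered attacker -> victim) overwrites
    the single mark field with probability p. Returns the surviving mark."""
    mark = None
    for router in path:
        if rng.random() < p:
            mark = router
    return mark

def reconstruct_order(path, p=0.3, packets=20000, seed=1):
    """Collect marks over many packets and rank routers by mark frequency.
    The expected ranking runs from the victim side back toward the attacker."""
    rng = random.Random(seed)
    counts = Counter(m for _ in range(packets)
                     if (m := mark_packet(path, p, rng)) is not None)
    return [router for router, _ in counts.most_common()]
```

For a path R1 -> R2 -> R3 (R3 nearest the victim), a router's mark survives with probability p(1-p)^d, where d is the number of routers after it, so over many packets the frequency ranking comes out R3, R2, R1.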
Conclusion
Network administrators must employ several techniques and methodologies to address the concerns about the Chief Financial Officer in the minimum time possible. Initially, it is very difficult to trace CFO activities that have already taken place within the system and continue. Network administrators need to configure the network security appliances intelligently. After detecting the CFO's suspicious activities, the systems where the illegal activity is located must be identified. Identification of systems is not enough, however; administrators must also identify and trace the affected network ports and services. Once the affected systems, ports, and services are known and the evidence is preserved, administrators can make informed decisions. Lastly, tracing the source means tracing the intruder, i.e., the CFO's illegal activities. This may be a challenging task, as evidence is limited and timestamps are not always correct; network administrators can use packet marking and analyze the packet header, focusing on two areas of interest, the timestamps and the record route. Evidence can be preserved by isolating it to prevent misuse of the logs and other evidence against the CFO. Moreover, a chain of custody will be initiated, and certified forensic tools will be operated by certified security forensics professionals. This stage is critical, as the evidence must be preserved and its integrity maintained for presentation in a court of law.
Future of Digital Forensic Investigation
During a presentation at Carnegie Mellon University's CyLab Capacity Building Program, Dr. Roy Nutter differentiated between forensics and security: security comprises all the theory and mechanisms required to design protection for people and resources, whereas forensics is triggered when an incident occurs. As security incidents rise, there will be huge demand for forensic computing professionals in the future. Moreover, Peterson concluded that a forensic computing professional deals with highly technical subjects and must have the patience of a wildlife photographer along with literary skills equivalent to Mark Twain's.
References
Computer forensics – a critical need in computer. (n.d.). Retrieved October 23, 2011, from http://www.scribd.com/doc/131838/Computer-Forensics-a-Critical-Need-in-Computer
Expert data acquisition best practice. (n.d.). Retrieved October 23, 2011, from http://www.scribd.com/doc/53797426/Expert-Data-Acquisition-Best-Practice
Scalisi, M. (2010). Analyze network problems with Wireshark. PC World, 28(4), 30.
Switched port analyzer. (2007). Network Dictionary, 469-470.
Zonglin, L., Guangmin, H., Xingmiao, Y., & Dan, Y. (2009). Detecting distributed network traffic anomaly with network-wide correlation analysis. EURASIP Journal on Advances in Signal Processing, 1-11. doi:10.1155/2009/752818