It is no secret that cloud computing is becoming popular with large companies, mainly because it lets them share valuable resources in a cost-effective way. This is just the beginning: the independent research firm Forrester Research expects the global cloud computing market to grow from $40.7 billion in 2011 to more than $241 billion in 2020 (Ried, et al., April 2011). This increasing migration towards cloud computing is resulting in an escalation of security threats, which is becoming a major issue. Cloud computing is even considered a "security nightmare", according to John Chambers, Cisco CEO (McMillan, 2009).
The migration of networks and servers to the cloud means that hacking techniques are now aimed at cloud-based servers. According to the Web Hacking Incident Database (WHID, 2011), Cross-Site Scripting (XSS) attacks are currently the second most common attack type, accounting for 12.58% of all attacks on the web. Moreover, XSS attacks can easily pass through Intrusion Detection Systems (IDSs), such as Snort, without raising an alert, as these systems lack detection for attacks which use hex-encoded values (Mookhey, et al., 2010).
This thesis aimed to detect XSS attacks sent to a cloud-based web server by using a honeypot that simulates a fake web server on the cloud infrastructure of Edinburgh Napier University. The honeypot is able to log any interaction with the simulated web server. By developing specific Bash scripts, this thesis showed that it is possible to detect XSS attacks through analysis of the honeypot's log file, leading to the generation of ACLs and Snort signatures. The ACLs block the source IP addresses of the attacks, and the Snort signatures prevent similar packets from entering the network again.
All experiments were performed on the cloud of Edinburgh Napier University to ensure that the results are as realistic as possible. The results show that out of a random set of 50 XSS attacks, the Bash scripts implemented in the honeypot generated 26 Snort signatures. These signatures were implemented in Snort, which was then able to detect 64% of the same set of XSS attacks. This is 4% above the acceptable level of True Positive alerts, which should be at least 60% of the total alerts raised (Timm, 2010). Finally, background traffic and XSS attacks were injected into the honeypot at increasing speed, to measure the efficiency of the honeypot in detecting attacks under high traffic loads. Despite a latency that increased with the network load, HoneyD was able to log and detect the same XSS attacks as before. However, at 2 Mbps the honeypot generated a "segmentation fault" error due to insufficient addressable memory. The 2 Mbps load was identified as the breaking point of the honeypot, with an unstable interval between 1.5 and 2 Mbps.
The conclusion drawn in this thesis is that HoneyD, coupled with Bash scripts, is able to automatically detect XSS attacks and trigger the generation of ACLs and Snort signatures. Further work could improve the detection engine.
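The log-to-signature step described above can be illustrated with a short sketch. The thesis used Bash scripts against HoneyD logs; the version below uses Python, and the log format, ACL syntax and Snort rule template are simplified assumptions rather than the thesis' actual formats:

```python
import re

# Hypothetical honeypot log lines: "date proto src_ip dst_port payload".
# The real HoneyD log format differs; this only sketches the idea.
XSS_PATTERN = re.compile(r"(<script|%3Cscript)", re.IGNORECASE)

def generate_rules(log_lines, sid_start=1000001):
    """Scan honeypot log lines for XSS payloads; emit router ACLs to block
    the source IPs and Snort signatures to match similar packets."""
    acls, sigs = [], []
    sid = sid_start
    for line in log_lines:
        fields = line.split(None, 4)
        if len(fields) < 5:
            continue
        match = XSS_PATTERN.search(fields[4])
        if not match:
            continue
        src_ip = fields[2]
        acl = f"deny ip host {src_ip} any"   # Cisco-style ACL entry
        if acl not in acls:
            acls.append(acl)
        sigs.append(
            f'alert tcp any any -> $HOME_NET 80 (msg:"XSS attempt"; '
            f'content:"{match.group(1)}"; nocase; sid:{sid};)'
        )
        sid += 1
    return acls, sigs

log = ["2011-04-01 tcp 10.0.0.5 80 GET /index.php?q=%3Cscript%3Ealert(1)%3C/script%3E"]
acls, sigs = generate_rules(log)
```

The `nocase` and hex-encoded `%3Cscript` variant matter here because, as noted above, encoded payloads are exactly what stock IDS rule sets tend to miss.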
2011 - Towards a Patient Simulator Framework for Evaluation of e-Health Environments: Modeling, Techniques, Validation and Implementation - PhD Transfer Report
TRADITIONALLY, record keeping of patient data within a healthcare environment has been conducted using paper-based systems. With the exponential growth in modern technology, electronic systems and highly sophisticated medical devices have started replacing this traditional method. Furthermore, with the Internet now a common everyday commodity in most developed parts of the world, the healthcare industry has begun migrating its systems to take advantage of modern communication infrastructures. Known as e-Health, this concept is the delivery of healthcare services on a mass scale. It enables cross-communication of medical data along with the delivery of healthcare services straight from the Internet itself. No longer are healthcare records isolated, as e-Health enables the sharing of medical data across all facilities, from the largest hospitals to the smallest drop-in clinics, using distributed computing techniques such as cloud computing. However, with the growth in the development of e-Health, including both platforms and services, the issue of how to evaluate these patient-centric environments remains unanswered. Data privacy laws make it difficult to use real patient data, whilst live deployment of e-Health environments requires overcoming many more legal and ethical requirements. Thus, in this report, it is proposed that the research and development of a patient simulator framework, using computer-based simulation techniques, enables the evaluation of e-Health environments under some early defined metrics including efficiency, reliability, security and scalability. As part of this report, the main aim and objectives of this work are outlined along with a comprehensive literature review of relevant subjects. The proposed novelty of this framework is presented and future work to be conducted is outlined.
2010 - Performance Evaluation of Virtualization with Cloud Computing
Cloud computing has been the subject of much research, which shows that it can reduce hardware costs and energy consumption and allow a more efficient use of servers. Many servers today are used inefficiently because they are underutilised, and the use of cloud computing combined with virtualization has been one solution to this underutilisation. However, virtualization with cloud computing cannot offer performance equal to native performance.
The aim of this project was to study the performance of virtualization with cloud computing. To meet this aim, previous research in this area was first reviewed, outlining the different types of cloud toolkits as well as the different ways of virtualizing machines. In addition, open-source solutions for implementing a private cloud were examined. The findings of the literature review were used to design the experiments and to choose the tools used to implement a private cloud. Experiments were then set up to evaluate the performance of both public and private clouds.
The results obtained through these experiments outline the performance of the public cloud and show that virtualizing Linux gives better performance than virtualizing Windows. This is explained by the fact that Linux uses paravirtualization while Windows uses HVM. The evaluation of performance on the private cloud permitted a comparison of native performance with paravirtualization and HVM; paravirtualization was found to have performance very close to native, in contrast to HVM. Finally, the cost of the different solutions and their advantages were presented.
Computers have become very useful tools for work, study and play, but they can also be used in a more sinister manner: criminals can use them to extract money and information from businesses and computer users. One way they do so is through botnets. A botnet is a collection of bots, typically controlled by a bot master; a bot is a piece of software that conceals itself on a computer system, acting on instructions received or programmed by the bot master(s). Botnets are becoming more elaborate and efficient over time, and their use is growing at an exponential rate, threatening average users and businesses alike.
The aim of this thesis was to understand, design and implement a botnet detection tool. In order to perform this task, a detailed analysis and taxonomy of the current botnet threat was produced, covering botnet operations, their behaviour and how they infect computer systems. Ethical considerations were encountered, chiefly in relation to securing the virtual environment required for the testing, evaluation and analysis of a real botnet. In response, three botnets were studied with the intention of creating a 'synthetic bot': Zeus, Stuxnet and, in particular, the KOOBFACE botnet, on which the synthetic bot was mainly based. This bot was then used to evaluate the detection software.
The next stage was to investigate botnet detection techniques and some existing detection tools. A prototype botnet detection tool, called 'Bot Shaiker', was designed and implemented. It takes the form of an agent-based application capable of detecting specific botnet activity using network traffic and files located on the computer. Bot Shaiker is written in Microsoft C# .NET; it integrates Snort, an open-source IDS, to look for botnet activity on the network, and checks the Windows firewall and the computer's registry for traces of botnets. These functions are exposed through an easy-to-use GUI application, or can run as a service on a user's computer.
Bot Shaiker was evaluated in a sandboxed virtual network using DARPA traffic. The results showed that Snort's network signatures proved effective and efficient; however, performance depended heavily on traffic volume. When receiving traffic at more than 80 Mbps, the performance of Snort decreases significantly, which means packets can be dropped. As the application is primarily designed for an end user with an average Internet speed, which typically falls well below this figure, the prototype would work well on most computer systems. The conclusions suggest that the prototype Bot Shaiker application is able to detect botnet activity using both network and host-based techniques.
2010 - An Evaluation of the Power Consumption and Carbon Footprint of a Cloud Infrastructure
The Information and Communication Technology (ICT) sector represents two to three percent of the world's energy consumption and about the same percentage of GreenHouse Gas (GHG) emissions. Moreover, IT-related costs represent fifty percent of the electricity bill of a company. In January 2010 the GreenTouch consortium, composed of sixteen leading companies and laboratories in the IT field led by Bell Labs and Alcatel-Lucent, announced that within five years the Internet could require a thousand times less energy than it does now. Furthermore, Edinburgh Napier University is committed to reducing its carbon footprint by 25% over the 2007/8 to 2012/13 period (Edinburgh Napier University Sustainability Office, 2009), and one of its objectives is to deploy innovative C&IT solutions. Therefore, there is a general interest, usually led by environmental concerns, in reducing the electrical cost of IT infrastructure.
One of the most prominent technologies when Green IT is discussed is Cloud Computing (Stephen Ruth, 2009). This technology allows on-demand self-service provisioning by making resources available as a service. Its elasticity allows automatic scaling with demand, and hardware consolidation thanks to virtualization. Therefore an increasing number of companies are moving their resources into a cloud managed by themselves or a third party. However, while this is known to reduce the electricity bill of a company if the cloud is managed by a third party off-premise, it does not say to what extent power consumption is reduced. Indeed, the processing resources seem to be just located somewhere else. Moreover, hardware consolidation suggests that power saving is achieved only during off-peak time (Xiaobo Fan et al, 2007). Furthermore, the cost of the network is never mentioned when the cloud is described as power saving, and this cost might not be negligible. Indeed, the network might need upgrades, because what was being done locally is done remotely with cloud computing. In the same way, cloud computing is supposed to enhance the capabilities of mobile devices, but the impact of cloud communication on their battery life is never mentioned.
Experiments were performed to evaluate the power consumption of an infrastructure relying on a cloud used for desktop virtualization, and to measure the cost of the same infrastructure without a cloud. The overall infrastructure was split into different elements, respectively the cloud infrastructure, the network infrastructure and end devices, and the power consumption of each element was monitored separately. The experiments considered different servers, network equipment (switches, wireless access points, a router) and end devices (desktops, an iPhone, an iPad and a Sony Ericsson Xperia running Android). The experiments also measured the impact of cloud communication on the battery of mobile devices.
The evaluation considered different deployment sizes and estimated the carbon emissions of the technologies tested. The cloud infrastructure turned out to be power saving, and not only during off-peak time, from a sufficiently large deployment size (approximately 20 computers) for the same processing power. For a wide deployment (500 computers) the power saving is large enough that it could cover the cost of a network upgrade to a Gigabit access infrastructure and still reduce carbon emissions by 4 tonnes, or 43.97%, over a year on Napier campuses, compared to a traditional deployment with a Fast Ethernet access network. However, the impact of cloud communication on mobile devices is significant, increasing their power consumption by 57% to 169%.
2010 - A COMPARATIVE STUDY OF IN-BAND AND OUT-OF-BAND VOIP PROTOCOLS IN LAYER 3 AND LAYER 2.5 ENVIRONMENTS
For more than a century the classic circuit-switched telephony in the form of the PSTN (Public Switched Telephone Network) has dominated the world of phone communications (Varshney et al., 2002). The alternative solution of VoIP (Voice over Internet Protocol), or Internet telephony, has nevertheless dramatically increased its share over the years. Originally started among computer enthusiasts, it has nowadays become a huge research area in both the academic community and industry (Karapantazis and Pavlidou, 2009). Therefore, many VoIP technologies have emerged to offer telephony services. However, the performance of these VoIP technologies is a key issue for the sound quality that end-users receive; where sound quality is concerned, the PSTN still stands as the benchmark.
Against this background, the aim of this project is to evaluate different VoIP signalling protocols in terms of their key performance metrics and the impact of security and packet transport mechanisms on them. In order to reach this aim, in-band and out-of-band VoIP signalling protocols are reviewed along with the existing security techniques which protect phone calls, and the network protocols that relay voice over packet-switched systems. In addition, the various methods and tools that are used to carry out performance measurements are examined together with the open-source Asterisk VoIP platform. The findings of the literature review are then used to design and implement a novel experimental framework, which is employed for the evaluation of the in-band and out-of-band VoIP signalling protocols with respect to their key performance metrics. The major issue with this framework is the lack of fine-grained clock synchronisation, which is required to achieve ultra-precise measurements. However, valid results are still extracted. These results show that in-band signalling protocols are highly optimised for VoIP telephony and outperform out-of-band signalling protocols in certain key areas. Furthermore, the use of VoIP-specific security mechanisms introduces only a minor overhead, whereas the use of Layer 2.5 protocols instead of Layer 3 routing protocols does not improve the performance of the VoIP signalling protocols.
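Among the key VoIP performance metrics referred to above, interarrival jitter is commonly computed with the smoothed estimator from RFC 3550 (the RTP specification). The thesis' actual measurement tooling is not specified in this abstract, but the standard estimator can be sketched in a few lines:

```python
def rfc3550_jitter(transit_times):
    """Smoothed interarrival jitter per RFC 3550: for each consecutive
    pair of packets, D is the change in one-way transit time, and the
    running estimate is updated as J += (|D| - J) / 16."""
    j = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        d = abs(cur - prev)
        j += (d - j) / 16.0
    return j

# Constant transit times mean zero jitter; a single 16 ms swing
# moves the smoothed estimate by 1 ms.
steady = rfc3550_jitter([10.0, 10.0, 10.0])
swing = rfc3550_jitter([10.0, 26.0])
```

The 1/16 gain is fixed by the RFC so that independent implementations report comparable values, which is exactly what a cross-protocol evaluation needs.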
2010 - Botnets
Name: Benoit Jacob Programme: BEng (Hons) in CNDS Completed: May 2010 PDF:PDF [Poster]
Botnets are networks of malware-infected machines controlled by an adversary, and are the cause of a large number of problems on the Internet. They are increasing faster than any other type of malware and have created a huge army of hosts across the Internet. By coordinating themselves, they are able to initiate attacks of unprecedented scale. An example of such a botnet can be written in Python. This botnet is able to carry out a simple attack which steals screenshots taken while the user is entering confidential information on a bank website. The aim of this project is firstly to detect and analyse this botnet's operation, and secondly to gather statistics on the Intrusion Detection System's detection rate.
Detecting malicious software on a system is generally done by an antivirus, which analyses a file's signature and compares it to its own database in order to know whether the file is infected. Other kinds of detection tools, such as Host-based IDSs (Intrusion Detection Systems), can be used: they flag abnormal activity but, in reality, generate many false positives. The tool Process Monitor is able to observe every process used by the system in real time, and another tool, Filewatcher, is able to detect any modification of files on the hard drive. These tools aim to recognise whether a program is acting suspiciously within the computer, so such activity should be logged by one of them. However, results from the first experiment revealed that host-based detection remained unfeasible using these tools, because the multitude of processes continuously running inside the system causes many false positives.
On the other hand, network activity was monitored in order to detect, using an Intrusion Detection System, the next intrusion or activity of this botnet on the network. The experiment tested the IDS under increasing network activity, injecting attacks into background traffic generated at different speeds, with the aim of seeing how the IDS reacts to this increasing traffic. Results show that the CPU utilisation of the IDS increases with the network speed. Even though all the attacks were successfully detected below 80 Mb/s, 5% of the packets were dropped by the IDS and could have contained malicious activity. This paper concludes that for this experimental setup, which uses a 2.0 GHz CPU, the maximum network activity should be 30 Mb/s to have a secure network with 0% of packets dropped by the IDS. Further development of this project could experiment with different CPU performance levels, assessing how the IDS reacts to increasing network activity and when it starts dropping packets. This would allow companies to gauge which configuration is needed for their IDS to be totally reliable, with 0% dropped packets, or semi-reliable, with less than 2% dropped packets.
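The reliability thresholds discussed above (0% dropped packets for a fully reliable IDS, under 2% for a semi-reliable one) amount to a simple calculation; the sketch below is an illustration of that classification, not code from the project:

```python
def ids_reliability(packets_sent, packets_processed):
    """Classify an IDS deployment by its packet-drop percentage, using
    the thresholds above: 0% dropped = reliable, under 2% = semi-reliable,
    anything more = unreliable."""
    dropped = packets_sent - packets_processed
    drop_pct = 100.0 * dropped / packets_sent
    if drop_pct == 0:
        label = "reliable"
    elif drop_pct < 2:
        label = "semi-reliable"
    else:
        label = "unreliable"
    return drop_pct, label

# E.g. the 80 Mb/s run above, where 5% of packets were dropped:
pct, label = ids_reliability(100000, 95000)
```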
2010 - Rate based IPS for DDoS
Name: Flavien Flandrin Programme: BEng (Hons) in Security and Forensics Completed: May 2010 Grade: A+ PDF:PDF
Nowadays every organisation is connected to the Internet, and more and more of the world's population has access to it. The development of the Internet simplifies communication between people: it is now easy to have a conversation with people from anywhere in the world. This popularity also brings new threats, such as viruses, worms, Trojans and denial of service. Because of this, companies have started to develop new security systems which help in the protection of networks. The most common security tools used by companies, or even by personal users at home, are firewalls, antivirus software and now even Intrusion Detection Systems (IDSs).
Nevertheless, this is not enough, so a new kind of security system has been created: the Intrusion Prevention System (IPS), which is becoming more popular with time. It can be defined as a blend of a firewall and an IDS: the IPS uses the detection capability of an IDS and the response capability of a firewall. Two main types of IPS exist, the Network-based Intrusion Prevention System (NIPS) and the Host-based Intrusion Prevention System (HIPS). The first should be set up in front of critical resources, such as a web server, while the second is set up inside a host and so protects only that host. Different methodologies are used to evaluate IPSs, but all of them have been produced by vendors or by organisations specialised in the evaluation of security devices. This means that no standard methodology for the evaluation of IPSs exists. A standard methodology would permit benchmarking systems in an objective way, making it possible to compare results between systems. This thesis reviews different evaluation methodologies for IPSs. Because of the lack of documentation around them, IDS evaluation methodologies are also analysed, which helps in the creation of an IPS evaluation methodology. The evaluation of such security systems is vast; this is why this thesis focuses on one particular type of threat: Distributed Denial of Service (DDoS). The evaluation methodology is built around the capacity of an IPS to handle such threats.
The produced methodology is capable of generating realistic background traffic along with attacking traffic in the form of DDoS attacks. Four different DDoS attacks are used to carry out the evaluation of a chosen IPS. The evaluation metrics are packet loss, which is evaluated in two different ways because of the selected IPS, the time to respond to the attack, the available bandwidth, the latency, the reliability, the CPU load, and the memory load.
All experiments were done in a real environment to ensure that the results are as realistic as possible. The IPS selected to carry out the evaluation of the methodology is the popular open-source Snort, which was set up on a Linux machine. The results show that the system is effective at handling a DDoS attack, but when the rate of 6,000 pps of malicious traffic is reached, Snort starts to drop malicious and legitimate packets without distinction. They also show that the IPS can only handle traffic below 1 Mbps.
The conclusion shows that the produced methodology permits the evaluation of the mitigation capability of an IPS. The limitations of the methodology are also explained; one of the key limitations is the impossibility of aggregating the background traffic with the attacking traffic. Furthermore, the thesis outlines interesting future work, such as the automation of the evaluation procedure to simplify the evaluation of IPSs.
2010 - Windows Encryption
Name: Vergin, Adrian Programme: BEng (Hons) in Network Computing Completed: May 2010 PDF:PDF
New versions of Windows come equipped with mechanisms, such as EFS and BitLocker, which are capable of encrypting data to an industrial standard on a Personal Computer. This creates problems if the computer in question contains electronic evidence. BitLocker, for instance, provides a secure way for an individual to hide the contents of their entire disk, but as with most technologies, there are bound to be weaknesses and threats to the security of the encrypted data. It is conceivable that this technology, while appearing robust and secure, may contain flaws, which would jeopardize the integrity of the whole system. As more people encrypt their hard drives, it will become harder and harder for forensic investigators to recover data from Personal Computers. By analyzing Windows encryption, the author intends to produce automated tools to aid investigators in gaining access to this data, as well as contribute to the progression of Windows encryption standards.
Over the course of this document, the author outlines both encryption systems and points out potential vulnerabilities in them. While presenting his findings, the author also provides tips and suggestions on how to use EFS and BitLocker in order to optimize their efficiency and make the best use of their strengths. This project also delivers software solutions designed to help compromise the integrity of these systems.
The ultimate finding of this project is that in order to keep data at rest optimally secure, both EFS and BitLocker should be used in tandem, or in conjunction with other encryption solutions. Neither solution is completely impenetrable on its own, but when combined with other forms of encryption, they provide a layer of defence that is sufficiently hard to crack.
2009 - Framework for Network IDS Evaluation
Name: Owen Lo Programme: BEng (Hons) in CNDS Completed: Dec 2009 [2010 prize winner] Grade: A+ / Winner of the best poster Paper published: ECIW 2010 [Paper] PDF: PDF Poster: PDF
There are a multitude of threats now faced in computer networks, such as viruses, worms, trojans, attempted user privilege gain, data stealing and denial of service. As a first line of defence, firewalls can be used to prevent threats from breaching a network. Although effective, threats can inevitably find loopholes to overcome a firewall. As a second line of defence, security systems such as malicious software scanners may be put in place to detect a threat inside the network. However, such security systems cannot necessarily detect all known threats, so a last line of defence comes in the form of logging threats using Intrusion Detection Systems (Buchanan, 2009, p. 43).
Being the last line of defence, it is vital that IDSs are up to an efficient standard in detecting threats. Researchers have proposed methodologies for the evaluation of IDSs but, currently, no widely agreed upon standard exists (Mell, Hu, Lippmann, Haines, & Zissman, 2003, p. 1). Many different categories of IDSs are available, including host-based IDS (HIDS), network-based IDS (NIDS) and distributed-based IDS (DIDS). Attempting to evaluate these different categories of IDSs using a standard accepted methodology allows for accurate benchmarking of results. This thesis reviews four existing methodologies and concludes that the most important aspects in an effective evaluation of IDSs must include realistic attack and background traffic, ease of automation and meaningful metrics of evaluation.
A prototype framework is proposed which is capable of generating realistic attacks including surveillance/probing, user privilege gain, malicious software and denial of service. The framework also has the capability of background traffic generation using static network data sets. The detection metrics of efficiency, effectiveness and packet loss are defined along with resource utilisation metrics in the form of CPU utilisation and memory usage. A GUI developed in Microsoft .NET C# achieves automation of sending attack and background traffic, along with the generation of detection metrics from the data logged by the IDS under evaluation.
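The abstract does not give the formulas behind the efficiency and effectiveness metrics. A pair of definitions commonly used in the IDS-evaluation literature, offered here only as an assumption about what such a framework computes, can be sketched as:

```python
def detection_metrics(tp, fp, fn):
    """Common IDS evaluation metrics (assumed definitions, not
    necessarily the thesis' own):
      effectiveness = TP / (TP + FN)  -- fraction of real attacks caught
      efficiency    = TP / (TP + FP)  -- fraction of alerts that were real
    where TP = true positives, FP = false positives, FN = false negatives."""
    effectiveness = tp / (tp + fn)
    efficiency = tp / (tp + fp)
    return efficiency, effectiveness

# A detector that catches 90 of 100 attacks but raises 60 spurious alerts
# is effective (0.9) yet inefficient (0.6) -- the pattern reported below.
eff, eft = detection_metrics(tp=90, fp=60, fn=10)
```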
Using a virtual networking environment, the framework was evaluated against the NIDS Snort to show the capabilities of the implementation. Mono was used to run the .NET application in a Linux environment. The results showed that, whilst the NIDS is highly effective in the detection of attacks (true positives), its main weakness is the dropping of network packets at higher CPU utilisations due to high traffic volume: at background traffic playback volumes of around 80 Mbps and above, Snort would begin to drop packets. Furthermore, it was also found that the NIDS is not very efficient, as it tends to raise many alerts even when there are no attacks (false positives).
The conclusion drawn in this thesis is that the framework is capable of carrying out an evaluation of an NIDS. However, several limitations of the current framework are also identified. One of the key limitations is the need for controlled aggregation of network traffic, so that attack and background traffic can be mixed together more realistically. Furthermore, the thesis shows that more research is required in the area of background traffic generation. Although the framework is capable of generating traffic using static data sets, a more ideal solution would be an implementation which allows the user to select certain "profiles" of network traffic, to better reflect the network environment in which the IDS will be deployed.
2009 - AN INTEGRATED FIREWALL POLICY VALIDATION TOOL
Name: Richard Macfarlane Programme: MSc in Adv. Networks Completed: Dec 2009 Grade: Distinction PDF:PDF
Security policies are increasingly being implemented by organisations. Policies are mapped to device configurations to enforce them, which is typically performed manually by network administrators. The development and management of these enforcement policies is a difficult and error-prone task.
This thesis describes the development and evaluation of an off-line firewall policy parser and validation tool. This provides the system administrator with a textual interface and the vendor-specific low-level languages they trust and are familiar with, but with the support of an off-line compiler tool. The tool was created using the Microsoft C#.NET language and the Microsoft Visual Studio Integrated Development Environment (IDE). This provided an object environment for creating a flexible and extensible system, as well as simple Web and Windows prototyping facilities for creating GUI front-end applications for testing and evaluation. A CLI was provided with the tool for more experienced users, but it was also designed to be easily integrated into GUI-based applications for non-expert users. The evaluation of the system was performed from a custom-built GUI application, which can create test firewall rule sets containing synthetic rules, to supply a variety of experimental conditions, as well as record various performance metrics.
The validation tool was created with a pragmatic outlook regarding the needs of the network administrator. The modularity of the design was important, due to the fast-changing nature of the network device languages being processed. An object-oriented approach was taken, for maximum changeability and extensibility, and a flexible tool was developed to meet the possible needs of different types of users. System administrators desire low-level, CLI-based tools that they can trust and use easily from scripting languages, whereas inexperienced users may prefer a more abstract, high-level GUI or wizard with an easier-to-learn process.
Built around these ideas, the tool was implemented and proved to be a usable and complementary addition to the many network policy-based systems currently available, as opposed to some other tools which work across multiple vendor languages but do not implement a deep range of options for any of them. The tool has a flexible design and contains comprehensive functionality. It complements existing systems, such as policy compliance tools and abstract policy analysis systems. Its validation algorithms were evaluated for both completeness and performance, and the tool was found to correctly process large firewall policies in just a few seconds.
A framework for a policy-based management system, with which the tool would integrate, is also proposed. This is based around a vendor-independent XML-based repository of device configurations, which could be used to bring together existing policy management and analysis systems.
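As an illustration of the kind of check such a validation tool performs, the sketch below detects shadowed rules, i.e. rules that can never match because an earlier rule already covers their traffic. The rule format and matching model are simplified assumptions, not the tool's actual implementation (which is written in C#; Python is used here only for brevity):

```python
def covers(a, b):
    """True if rule a matches at least everything rule b matches.
    Each rule's match tuple holds exact field values or '*' wildcards."""
    return all(af == "*" or af == bf for af, bf in zip(a["match"], b["match"]))

def find_shadowed(rules):
    """Return the indices of rules shadowed by some earlier rule,
    scanning the policy in first-match order."""
    shadowed = []
    for i, later in enumerate(rules):
        for earlier in rules[:i]:
            if covers(earlier, later):
                shadowed.append(i)
                break
    return shadowed

# Hypothetical two-rule policy: the broad deny hides the later permit.
# Fields: (source, destination, port).
policy = [
    {"match": ("10.0.0.0/8", "*", "*"), "action": "deny"},
    {"match": ("10.0.0.0/8", "any", "80"), "action": "permit"},  # never reached
]
```

A real validator would also need range and subnet overlap logic rather than exact-match wildcards, which is where most of the parsing and performance work in such a tool lies.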
2009 - Enhanced Event Time-Lining for Digital Forensic
Name: Colin Symon Programme: BEng (Hons) in CNDS Completed: Dec 2009 Grade: TBC PDF:PDF [Appendix]
In a digital forensics investigation, log files can be used as a form of evidence by reconstructing timelines of the computer system events recorded in log files. Log files can come from a variety of sources, each of which may make use of proprietary log file formats (Pasquinucci, 2007). In addition, the large volume of information to be filtered through can make the job of forensic examination a difficult and time consuming task.
The aim of this thesis is to explore methods of logging and displaying event information gathered from computer systems, specifically in relation to the collection, correlation and presentation of log information. A literature review found that, by correlating and storing log information in a central log database, it should be possible to construct a system which can access this information and present it to the investigator in the form of a timeline. The important contribution that visualisation techniques can bring to log analysis applications is captured by Marty (2008, p.5): “a picture is worth a thousand log records”.
A prototype system has been produced which makes use of the latest technologies to enhance current methods of displaying log data, such as those employed by the Microsoft Windows Event Viewer. The prototype system, developed using a rapid prototyping methodology, separates the log management process into collection, correlation and storage, and presentation. Through use of a standard XML log format and central storage of log information in a Microsoft SQL Server 2008 database, the prototype aims to overcome the issue of proprietary log formats and the difficulty in correlating data obtained from different sources. A log and timeline viewer application has been developed using C#, Windows Presentation Foundation and .NET Framework technologies, enabling the digital forensics investigator to filter event records and visualise timelines of events by means of bar, line and scatter charts.
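The timeline charts described above rest on a simple aggregation step, bucketing event timestamps; a minimal sketch of that idea (the field names are assumptions, not the prototype's actual schema):

```python
from collections import Counter
from datetime import datetime

def hourly_timeline(events):
    """Count log events per hour -- the aggregation behind a timeline bar chart."""
    counts = Counter()
    for e in events:
        t = datetime.fromisoformat(e["timestamp"])
        # Truncate to the hour so each event falls into one bucket.
        counts[t.replace(minute=0, second=0, microsecond=0)] += 1
    return counts

# Illustrative records in a common format, as if already correlated centrally.
events = [
    {"source": "Security", "timestamp": "2009-11-02T09:15:00"},
    {"source": "System",   "timestamp": "2009-11-02T09:40:00"},
    {"source": "Security", "timestamp": "2009-11-02T11:05:00"},
]
```

Filtering by source or severity before bucketing gives the investigator the narrowed timeline views the prototype aims at.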
Through user evaluation it has been found that the prototype system improves upon the Microsoft Windows Event Viewer from overview and filtering perspectives. Technical experimentation found scalability issues in the way the prototype system imports log information contained within XML files into the database component. The time taken to import log records of various sizes into the database was measured; for files larger than 2MB, it was longer than two of the seven users who gave feedback on the system would be prepared to wait. Further development of timeline visualisation has been suggested, as the prototype system is somewhat limited in its ability to provide details of the links between digital events.
2009 - Mitigation of DDoS
Name: Stuart Gilbertson Programme: BEng (Hons) in CNDS Completed: Dec 2009 Grade: TBC PDF:PDF Abstract:
IT administrators are constantly faced with threats originating from the Internet (West, M. 2008). There is a huge range of solutions on the market today to assist and defend against such threats, but they are either costly or complicated to set up and configure properly. The assaults circulating on the Internet at this very moment range from malicious viruses designed to destroy or corrupt data to technologically advanced robot networks with intelligent coding that allows them to intercommunicate with each other and send captured private data, such as bank account numbers and credit card details, back to the owner of the malware (Hoeflin, D. et al. 2007).
Distributed Denial of Service attacks are getting more and more advanced as a result of sophisticated malware being released into the wild. As a result, new methods of mitigating these attacks need to be designed and developed. This thesis aimed to review related work and gain an in-depth understanding of DDoS taxonomy, botnets, malware and mitigation methods. The thesis also attempted to apply this research in a prototype system that could potentially be used in a real environment to detect and mitigate an active DDoS attack on a server.
The main aim of this thesis was to analyse current Distributed Denial of Service botnets, malware and taxonomies in order to determine the best prototype to design and develop to detect and mitigate an attack on a server. A human verification prototype was developed and implemented that requires user input to validate that the visitor to a site is a real person. This system only triggers if the visitor sends too many requests for the site per minute. If the visitor fails to validate, the system firewalls their IP address.
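The per-minute trigger described above amounts to a sliding-window counter per IP address; a minimal sketch under assumed values (the threshold and window below are illustrative, not the prototype's):

```python
import time
from collections import defaultdict, deque

THRESHOLD = 120  # requests per minute before verification triggers (illustrative)
WINDOW = 60.0    # window length in seconds

hits = defaultdict(deque)  # ip -> timestamps of recent requests

def needs_verification(ip, now=None):
    """Record one request; return True once the per-minute rate is exceeded."""
    now = time.time() if now is None else now
    q = hits[ip]
    q.append(now)
    while q and now - q[0] > WINDOW:  # drop requests older than the window
        q.popleft()
    return len(q) > THRESHOLD
```

A visitor flagged this way would be forwarded to the human-verification page, and firewalled only on failing it.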
This thesis and prototype, although slightly inefficient at high load levels, could
potentially help to mitigate a medium-scale DDoS attack on a website. The
prototype does indeed detect a user that exceeds the threshold set, and it does
then forward them on for verification. The prototype also then creates entries that
simulate the IP address of that user being blocked at a firewall level. However,
the prototype does not account for false positives that may occur. If a legitimate user were to exceed the threshold and then fail validation, their IP address would be firewalled on the server permanently.
Where is he now: Stuart owns Consider IT, providing IT support to businesses in Edinburgh and the surrounding areas.
2009 - Mobile Out-of-Band Authentication
Name: Ashlef Wagstaff Programme: BEng (Hons) in CNDS Completed: Jan 2009 Grade: TBC PDF:PDF Abstract:
With increasing numbers of broadband connections (Office for National Statistics, 2008) and consumers conducting ever more complex transactions on those connections (Nicholas, Kershaw, & Walker, 2006 /2007), it is imperative that users and services have accountability through proof of identity (Summers, 1997). Yet some proponents argue that given the openness of the internet it may be almost impossible to absolutely prove the identity of a remote person or service (Price, 2006).
Kim Cameron, in his argument for Federated Identity, states that “A system that does not put users in control will – immediately or over time – be rejected.” (2005), a view also echoed by Dean (Identity Management – back to the user, 2006). The aim of the thesis is to argue for a self-authentication factor that is integrated into a Federated Identity infrastructure using an out-of-band loop to a mobile device; this argument is then supported with an implemented proof-of-concept prototype. The prototype and its concept are evaluated in a small usability study and an encryption performance experiment on a mobile device. The results of the usability study show that users feel more comfortable with self-authentication using something physical that they hold and respond to than with a third party verifying information on their behalf. The results also show that the encryption needed for end-to-end confidentiality and integrity during the out-of-band communication will affect battery life to a degree. The thesis concludes that there is a sound basis for self-authentication from a user perspective and that further user and infrastructure studies will need to be conducted before self-authentication is realised in the marketplace. It also found that implementing the prototype was more straightforward with the .NET Compact Framework on the Windows Mobile device than with the Java ME platform.
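The out-of-band loop argued for here reduces to issuing a short-lived code over a second channel and checking it on return; a hedged sketch of that flow (the code length and lifetime are illustrative assumptions, not the thesis prototype):

```python
import hmac
import secrets
import time

pending = {}  # session id -> (code, expiry time)

def issue_code(session_id, lifetime=120):
    """Generate a short-lived 6-digit code; in a real system it would be
    sent to the user's mobile device over the out-of-band channel."""
    code = f"{secrets.randbelow(10**6):06d}"
    pending[session_id] = (code, time.time() + lifetime)
    return code

def verify_code(session_id, supplied):
    """Check the code the user typed back over the primary channel.
    Codes are single-use: popping removes the pending entry."""
    code, expiry = pending.pop(session_id, (None, 0))
    return code is not None and time.time() < expiry and hmac.compare_digest(code, supplied)
```

The encryption of the out-of-band channel itself, whose battery cost the thesis measures, is a separate layer omitted from this sketch.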
2009 - Corporate Intranet based upon MS Office SharePoint Server 2007
Name: James Robb Programme: BEng (Hons) in CNDS Completed: Jan 2009 Grade: TBC PDF:PDF Abstract:
The use of intranets has grown exponentially since the mid-1990s, with the scope and range of applications increasing every year. In 1998, U.S. organisations were reported to spend over $10.9 billion, or a quarter of their web-related project spending, on intranet projects (Lamb, 2000). By 2001 it was reported that nearly 90% of large organisations had some form of intranet, and nearly a quarter of these organisations' intranets had capabilities well beyond the simple publication of news and memos (Stellin, 2001). At present, intranet technologies have developed considerably since their introduction as static web pages for corporate announcements and notifications. Today, intranets are used as a central hub for global communication and collaboration, as well as a portal for back-end applications and legacy systems.
Microsoft Office SharePoint Server (MOSS) is a one stop solution for collaboration, content management, business process, business intelligence, enterprise search implementation and portal development. MOSS is built upon the Windows SharePoint Services (WSS) and .NET platforms and provides increased functionality and capabilities. By investigating and utilising the components of MOSS this dissertation aims to produce a corporate intranet prototype. The prototype intends to provide a secure platform for collaboration, increased productivity, and reducing administration overheads.
This report examines existing intranet solutions and the evolutionary process they have undergone. The design of the intranet prototype was based upon existing intranet solutions using the components offered by MOSS. The ability for teams to collaborate was of the utmost importance, and the prototype incorporates a number of MOSS features to achieve this. In an attempt to improve productivity, the intranet combines multiple content repositories into a single interface. This simplifies the search process drastically, allowing users to find relevant content from multiple sources with one click. It is estimated that 18% of corporate material becomes out of date within only 30 days of being produced (McGrath & Schneider, 1997). To remedy this, MOSS allows the implementation of business processes as electronic workflows, which can simplify these processes and integrate with the existing intranet framework. To encourage the sharing of information and resources between employees, personal areas for each user were also implemented.
The dissertation finds that whilst the implementation was deemed a success through user testing, further research would be required to find more suitable security controls. Furthermore, as little hard coding is required to implement web pages, non-technical users could be given control to create their own sites. Spreading governance in this way allows teams to create sites better adapted to their needs, but it also poses significant problems: non-technical users will invariably have little concept of the security risks posed and may in fact be compromising the data they are using. Overall, although the prototype created contains a number of flaws, the capabilities of MOSS as an intranet platform have been fully investigated and the implementation shows its potential.
2009 - IDS Evaluation
Name: Julien Corsini Programme: MSc in Adv Networks Completed: Jan 2009 Grade: TBC PDF:PDF Abstract:
Nowadays, the majority of corporations mainly use signature-based
intrusion detection. This trend is partly due to the fact that signature
detection is a well-known technology, as opposed to anomaly
detection which is one of the hot topics in network security research. A second
reason may be that anomaly detectors are known to generate many alerts, the majority of which are false alarms.
Corporations need concrete comparisons between different tools in order
to choose which is best suited for their needs. This thesis aims at comparing
an anomaly detector with a signature detector in order to establish which is
best suited to detect a data theft threat. The second aim of this thesis is to
establish the influence of the training period length of an anomaly Intrusion
Detection System (IDS) on its detection rate.
This thesis presents a Network-based Intrusion Detection System (NIDS)
evaluation testbed setup. It shows the setup of two IDSes, the signature detector
Snort and the anomaly detector Statistical Packet Anomaly Detection
Engine (SPADE). The evaluation testbed also includes the setup of a data theft
scenario (reconnaissance, brute force attack on server and data theft).
The results from the experiments carried out in this thesis proved inconclusive, mainly because the anomaly detector SPADE requires a configuration adapted to the monitored network.
Despite the inconclusive experimental results, this thesis could act as documentation for setting up a NIDS evaluation testbed. It could also be considered documentation for the anomaly detector SPADE: there is no centralised documentation about SPADE, and not a single research paper documents the setup of an evaluation testbed.
2009 - Evaluation of Digital Identity using Windows CardSpace
The Internet was initially created for academic purposes and, due to its success, has been extended to commercial environments such as e-commerce, banking and email. As a result, Internet crime has also increased. This can take many forms, such as personal data theft, impersonation of identity and network intrusions. Systems of authentication such as username and password are often insecure and difficult to handle when the user has access to a multitude of services, as they have to remember many different credentials. Other, more secure systems, such as security certificates and biometrics, can be difficult for many users. This is further compounded by the fact that the user does not often have control over their personal information, as it is stored on external systems (such as on a service provider's site).
The aim of this thesis is to present a review and a prototype of a Federated Identity Management system, which returns control of identity information to the user. In this system the user controls their identity information and can decide whether to provide specific information to external systems. The user can also manage their identity information easily with Information Cards, which contain a number of claims representing the user's personal information and can be used across a number of different services. The Federated Identity Management system also introduces the concept of the Identity Provider, which handles the user's identity information, verifies that the user's credentials are valid, and issues a token to the service provider.
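The Identity Provider token described above is, in essence, a signed set of claims; a minimal sketch of that idea (HMAC signing and the claim names are illustrative simplifications, not the CardSpace wire format):

```python
import hashlib
import hmac
import json

IDP_KEY = b"identity-provider-secret"  # illustrative key held by the IdP

def issue_token(claims):
    """Sign a set of claims so a relying party can verify their origin."""
    body = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(IDP_KEY, body, hashlib.sha256).hexdigest()
    return body, sig

def verify_token(body, sig):
    """Relying-party check: recompute the signature over the claims."""
    expected = hmac.new(IDP_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

The point of the design is that the service provider trusts the signature, not the user, while the user decides which claims are released.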
The prototype has been developed using a number of different technologies, including the .NET Framework 3.0, CardSpace, C# and ASP.NET. In order to obtain a clear result from this model of authentication, the work created a website prototype that provides user authentication by means of Information Cards, and another, for evaluation purposes, using a username and password. The evaluation includes a timing test (which measures the time taken by the authentication process), a functionality test, and quantitative and qualitative evaluation with 13 different users. The results obtained show that the use of Information Cards seems to improve the user experience in the authentication process and to increase the security level over username and password authentication.
This thesis concludes that the Federated Identity Management model provides a strong solution to the problem of user authentication, could protect the privacy rights of the user, and returns control of identity information to the user.
2007 - Enhanced Educational Framework for Networking
Teaching and assessing students in the practical side of networking can be achieved through the use of simulators. However, network simulators are limited in what they can do, since the device being simulated is not fully functional and the generation of exercises always results in the same specification being presented to the student [1, 2]. When the student has finished the exercise they are just presented with a pass or fail mark, with no indication of areas of weakness or strength.
The thesis investigates how the Bloom and SOLO learning taxonomies can be used to specify and mark network challenges while using the idea of fading worked examples to design the challenges to lower the cognitive load on the student.
This thesis then proposes a framework that can be used to generate network challenge specifications that change every time the student attempts them. The challenge can then be solved using an emulation package called Dynamips, while a bolt-on package called GNS3 provides the graphical user interface. Once the student has finished the challenge it is graded, and feedback is presented indicating what was correct and incorrect.
The evaluation of the framework was carried out in two phases. In the first phase the performance of the framework was monitored using a Windows utility called Performance Monitor. The performance was measured on Windows XP, Windows Vista and XP running in an emulator, and in each instance was deemed satisfactory for running on that operating system.
The second phase of the evaluation was carried out by asking students to evaluate the proposed framework. Once the students had finished, they were asked to fill in a questionnaire about their experience. From the results, two of the most positive aspects of using the framework were that a fully featured IOS command line interface was available for the students to use, and that once they had mastered a skill they did not have to start from scratch in subsequent exercises, reusing skills already mastered. However, one negative aspect noted from the questionnaire was the number of complex steps that had to be followed to set up the challenge.
The final implementation of the framework proved the concept of the design, even though not all of the proposed elements were implemented. A program was written that generated a challenge with dynamic variables that changed every time it was attempted; Dynamips provided the student with a fully working command line IOS interface, and GNS3 provided a graphical user interface. Finally, the student was presented with feedback when they had completed the challenge.
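The dynamic-variable challenge generation described above can be sketched as parameter randomisation plus an answer check; the subnetting exercise below is an invented example, not one of the framework's actual challenges:

```python
import ipaddress
import random

def make_challenge(seed=None):
    """Generate a fresh subnetting exercise each attempt: a random /24
    network and a random target prefix length."""
    rng = random.Random(seed)
    net = ipaddress.ip_network(f"10.{rng.randrange(256)}.{rng.randrange(256)}.0/24")
    prefix = rng.choice([25, 26, 27, 28])
    return {"network": net, "new_prefix": prefix}

def grade(challenge, answer):
    """Return feedback rather than a bare pass/fail: expected vs given."""
    expected = len(list(challenge["network"].subnets(new_prefix=challenge["new_prefix"])))
    return {"correct": answer == expected, "expected": expected, "given": answer}
```

Because the specification is re-randomised on every attempt, two attempts by the same student never present the same numbers, which is the property the framework aims for.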
2008 - Analysis and Evaluation of the Windows Event Log for Forensic Purposes
Name: Barrie Conda Programme: BEng (Hons) in CNDS Completed: Jun 2008 Grade: PDF:Event Log [Poster] Abstract:
The Windows event log is used in digital forensic cases, but unfortunately it is flawed in many ways and often cannot be seen as a verifiable method of determining events. In the past few years there have been a few highly publicised cases where the data contained within the event log was used to successfully secure a conviction. The aim of this dissertation is to develop a solution that addresses the flaws in the Windows event logging service. The research carried out found that it was possible to disable the event log service, which then allowed important data to be modified, such as usernames, computer names, times and dates. It was also noted that an event log from one machine could successfully be transplanted into another without any problems. All of these vulnerabilities involved having access to, and being able to edit, the event log files.
Based upon this research, an event logging application was developed using C# and the Microsoft .NET Framework. It makes use of RSA and AES encryption and HMAC hash signatures to improve the integrity of the data. The application is divided into three components: an event logger, which monitors specific files and folders within a computer system; a data archiving system, to which alerts are sent in an XML format; and an event viewer, which presents the events in a readable format to the user.
The performance of symmetric and asymmetric encryption was tested, and it was found that symmetric encryption was 800% faster than asymmetric encryption. The HMAC hash signatures were also tested to see how long a brute-force attack on them would take. It was discovered that approximately 21,093 keys were processed every second; this was then compared to the key entropy, showing how a longer random key would be harder to break.
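The brute-force comparison above is straightforward arithmetic: at roughly 21,093 keys per second, the worst-case search time doubles with every extra bit of key entropy. A quick check of the scaling:

```python
RATE = 21_093  # keys tested per second, as measured in the thesis

def worst_case_years(entropy_bits):
    """Years to exhaust the full keyspace at the measured rate."""
    return (2 ** entropy_bits) / RATE / (365 * 24 * 3600)

# At this rate a 40-bit keyspace falls in under two years,
# while each additional bit doubles the worst-case search time.
```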
2008 - SQL Injection Attacks
Name: Ashok Parchuri Programme: MSc in Adv Networks Completed: Oct 2008 Grade: PDF: Detection of an SQL Injection Attack Abstract: With rapid improvements in data transfer speeds over the last two decades, the Internet has become a major opportunity for enterprises to advertise themselves and to maintain their data safely in servers to make it more accessible. With the improvement in Internet technologies, malicious activities are a threat to enterprises seeking to maintain the integrity of their data.
The purpose of this thesis is to capture the data flowing in a network and to analyze it to find malicious packets carrying SQL injection attacks. It aims to detect these attacks because an attacker can compromise a server simply using a web browser, and can cause severe damage such as deleting data from a database or changing its values. The literature review shows that previous methods of detection have used anomaly detection, whereas this thesis uses a novel metric-based system which measures the threat level of a URL. In creating a prototype of the system, an application was created to detect malicious packets using the C# language with Microsoft Visual Studio 2005. WinPcap is used for capturing the network packets flowing through the network interface in promiscuous mode. The application was developed around the idea of capturing packets and checking them for the malicious keywords that are used in SQL injection attacks. Keywords such as SELECT, DELETE, OR and FROM are assigned a malicious metric value, along with possible threats in the URL from certain characters, such as '=' and the single quote character. When the resulting summation exceeds a given threshold, the application alerts the user that it has found an injection attack. The thesis presents different weights for these threat elements and produces an overall threat level.
Several tests have been conducted to analyse the threshold value. These were conducted using over 1,000 URL strings captured from normal traffic, some of which were injected with malicious keywords. It was found that the application successfully captures all the malicious strings, but it also produced false positives for strings that are not malicious: for a run of 1,000 URLs, it detected 10 true positives and 30 false positives. The thesis concludes with a critique of the application, along with suggested future improvements to its performance, including methods that could improve the metric system.
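The keyword-weight metric described above can be sketched in a few lines; the weights and threshold below are illustrative stand-ins, not the values the thesis derives:

```python
import re

# Illustrative weights and threshold -- the thesis presents its own values.
WEIGHTS = {"select": 3, "delete": 4, "from": 2, "or": 1, "'": 2, "=": 1}
THRESHOLD = 6

def threat_level(url):
    """Sum the weights of suspicious tokens found anywhere in a URL."""
    lowered = url.lower()
    score = 0
    for token, weight in WEIGHTS.items():
        # Whole-word match for SQL keywords, plain substring for punctuation.
        pattern = rf"\b{re.escape(token)}\b" if token.isalpha() else re.escape(token)
        score += weight * len(re.findall(pattern, lowered))
    return score

def is_attack(url):
    return threat_level(url) > THRESHOLD
```

A benign query string with a single `=` scores far below the threshold, while a classic tautology injection accumulates weight from the quote characters, the `OR` keyword and the equals signs, which is exactly the trade-off behind the thesis's false-positive figures.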
2008 - Distributed Healthcare Framework using Patient-centric Role-based Security Modelling and Workflow
Name: Mat Symes Programme: BEng (Hons) in CNDS Completed: Oct 2008 Grade: PDF:Distributed Healthcare Framework using Patient-centric Role-based Security Modelling and Workflow Abstract: Healthcare professionals are spending less time with patients and more time on administrative duties (Royal College of Nursing, 2008). This is due to a high bureaucratic demand on the caring process (Brindley, 2007), patient population and longevity (Fougère and Mérette, 1999). A patient-centric system uses gathered information and includes the patient in its functional design (IBM, 2006). Patient-centric requirements have existed in UK healthcare IT since 2000 (Fairway, 2000). Some existing systems cannot be patient-centric because the strategies that shape the requirements for IT systems have changed over time (Mackenzie, 2004); information technology solutions built at different times therefore meet different healthcare requirements. The data created in these differing systems can become disparate and less useful (Singureanu, 2005). Patient information is sensitive; medical healthcare professional roles, such as doctors, can only access a patient's health record at appropriate times. Other healthcare professionals must ask for a patient's permission to access their health record, whilst other roles in the National Health Service (NHS) are only entitled to non-medical information (Scottish Consumer Council, 2007). This implies that viewing patient data attributes is only permissible by role.
The aim of this project is to provide a patient-centric prototype distributed system that can demonstrate approaches to reducing complexity through data and interface integration; increasing visibility through relevant role based information targeting; and reducing administrative overhead through electronic workflow.
This report examines the history of IT strategies in the NHS, identifying some of the key aims from 1992 to 2008. It then discusses some of the standards defined to allow differing systems to communicate and highlights some of the existing IT systems in healthcare today.
The system design allows patients to interact with the system in the same way as healthcare professionals; it provides access to a personal space that displays tasks for the patient or healthcare professional to complete. Data integration is used to build a patient record from local and disparate data sources. Information targeting allows the patient or healthcare professional to visit an area that only displays information relevant to that person. Finite State Machine methodologies are used to design an electronic workflow, which maps the business process of making a referral.
Using the Microsoft Office SharePoint Server information management framework, data integration is achieved through XML definition and the gathering of meta-data (Hoffman and Foster, 2007). Information targeting is achieved through personalised filtering and security permission modelling (Holiday et al., 2007); workflow is accomplished through the application of design, manipulation of the framework, and dedicated workflow code libraries (Mann, 2007) built on top of popular ASP.NET web server technology (Walther, 2006).
This project finds that whilst it is possible to implement these approaches in a theoretical context, more research is required into their application in real-world scenarios. In addition, this report finds that software boundaries within the framework suggest the capacity for very large record, user and security management loads (Curry et al., 2008); however, research into the factors that affect these boundaries, such as concurrency and healthcare professional/patient activity on such systems, is required in order to extrapolate accurate scalability requirements.
2008 - Analysis of QoS in Real Time VoIP Network
Name: Arjuna Mithra Sreenivasan Programme: MSc in Adv. Networks Completed: Oct 2008 Grade: PDF:Analysis of VoIP [PPT] Abstract:
The aim of this project is to identify and analyse different queuing mechanisms and to mark traffic flows in a real-time VoIP network. A prototype design is created to determine the effect of each queuing technique on voice traffic. Voice traffic is marked using DSCP, specifically the Expedited Forwarding (EF) PHB. Using a network monitoring tool (VQ Manager), the voice traffic stream is monitored and the QoS parameters are measured: delay, jitter and packet loss. By analysing these QoS parameters, the efficiency of each queuing technique is identified.
Experiments are performed on a converged data and voice IP network. Voice being sensitive to jitter, delay and packet loss, the voice packets are marked and queued to analyse four different queuing mechanisms: Priority Queuing (PQ), Weighted Fair Queuing (WFQ), Class-Based Weighted Fair Queuing (CBWFQ) and Low Latency Queuing (LLQ). Each queuing mechanism has its own characteristics: in PQ, a higher-priority queue has strict priority over lower ones; WFQ provides fair queuing, dividing the available bandwidth across queues of traffic flows based on weights; CBWFQ is an extended form of WFQ, which guarantees minimum bandwidth based on user-defined traffic classes; and LLQ is the combination of PQ and CBWFQ.
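The strict-priority behaviour of PQ described above can be sketched in a few lines: the scheduler always drains the highest-priority non-empty queue first (the queue names and packets are illustrative):

```python
from collections import deque

queues = {"high": deque(), "medium": deque(), "low": deque()}
ORDER = ["high", "medium", "low"]  # service order: strict priority

def enqueue(cls, pkt):
    queues[cls].append(pkt)

def dequeue():
    """Strict priority: lower classes are served only when higher ones are empty."""
    for cls in ORDER:
        if queues[cls]:
            return queues[cls].popleft()
    return None

enqueue("low", "data-1")
enqueue("high", "voice-1")  # EF-marked voice would map to the high queue
enqueue("low", "data-2")
sent = [dequeue() for _ in range(3)]
```

The voice packet jumps the data packets even though it arrived later, which is why PQ minimises voice delay but can starve lower classes, the weakness WFQ, CBWFQ and LLQ address.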
The outcome of this project is an understanding of the effect of the queuing mechanisms and of traffic classification. The results obtained from the experiments can be used to determine the most efficient queuing technique.
2007 - Analysis and Optimization of Data Storage using Enhanced Object Models in the .NET Framework
Name: Ashish Tandon Programme: MSc in Advanced Software Engineering Completed: Oct 2007 Grade: Merit (D1) - Distinction PDF:Analysis and Optimization of Data Storage using Enhanced Object Models in the .NET Framework [Presentation] Abstract: The purpose of this thesis is to benchmark database performance using Microsoft COM+, the component framework most commonly used for developing component-based applications. A prototype application, written in the Microsoft Visual C#.NET language, is used to benchmark database performance on the Microsoft .NET Framework 2.0 and 3.0 environments, using data volumes ranging from low (100 rows) to high (10,000 rows) with five or ten user connections. Different types of application (COM+, non-COM+ and .NET-based) are used to show their performance on the different volumes of data, with the specified numbers of users, on the .NET Framework 2.0 and 3.0.
The results have been collected and analysed using operating-system performance counters, together with Microsoft .NET class libraries that help in collecting system-level performance information. This can help developers, stakeholders and management decide on the right technology to use in conjunction with a database. The experiments conducted in this project show substantial gains in the performance, scalability and availability of component-based applications using Microsoft COM+ features such as object pooling, application pooling, role-based security, transaction isolation and enabled constructors.
The outcome of this project is that a Microsoft COM+ component-based application provides optimized database performance using SQL Server. There is a performance gain of at least 10% in the COM+ based application compared to the non-COM+ based application. COM+ service features, however, come at a performance penalty: differences of around 15%, 20% and 35% were noticed between the plain COM+ based application and applications using role-based security, enabled constructors and transaction isolation respectively. The COM+ based application provides performance gains of around 15% and 45% on low and medium volumes of data on .NET Framework 2.0 in comparison to 3.0, while the COM+ server-based application on .NET Framework 3.0 shows a significant gain of around 10% on a high volume of data. This indicates that high-volume applications work better with Framework 3.0 than with 2.0 on SQL Server.
The application-type results show that the COM+ component-based application performs better than the non-COM+ and .NET-based applications, with differences of around 20% and 30% on low and medium volumes of data. The .NET-based application performs better on high volumes of data, with a performance gain of around 10%.
Broadly the same results were obtained in the tests conducted on MS Access, where the COM+ based application running under .NET Framework 2.0 performed better than the non-COM+ and .NET-based applications on low and medium volumes of data, and the .NET Framework 3.0 based COM+ application performed better on high volumes of data.
2007 - Automated Process of Network Documentation
Name: Bryan Campbell Programme: MSc in Advanced Networks Completed: June 2007 Grade: Merit (D2) - Distinction PDF:Automated Process of Network Documentation Abstract: Knowledge of network topologies is invaluable
to system administrators regardless of the size of an enterprise.
Yet this information is time-consuming to collect, and even more so to process into easily consumable formats (i.e. visual maps). This is especially so when the culture within which administrators operate is more concerned with operational stability and continuity as deliverables than with documentation and analysis. The time-cost of documentation impinges upon its own production. This continues to be the case even though documentation is of increasing importance to non-technical personnel in enterprises, and as a complement/supplement to network management systems.
This thesis puts forth a framework to largely automate the process
of documenting network topologies. The framework is based on issues
raised in recent research concerning the needs of IT administrators,
and network discovery methods. An application is also described
serving as a proof-of-concept for the central elements of the
framework. This application was realized in the Microsoft Visual
C# 2005 Express Edition programming environment using the C#.NET
language. The compiled result is supported by the .NET Framework
2.0 runtime environment. The application provides for an administrator
to control, through a graphical interface, the sequence of discovering
a network and outputting visual documentation. For testing, Cisco
Systems routers and switches, along with a Microsoft Windows-based
laptop, were used to construct a mock network. Measurements of
the performance of the application were recorded against the mock
network in order to compare it to other methods of network discovery.
Central to the application's implementation is a recognition
that networks are more likely than not to be heterogeneous. That
is, they will comprise equipment from more than one vendor.
This assumption focused the choices about the framework design
and concept implementation toward open standard technologies.
Namely, SNMP was selected for discovery and data gathering. XML
is utilized for data storage. Data processing and document production
is handled by XSL. Built around these technologies, the application
successfully executed its design. It was able to query network
devices and receive information from them about their configuration.
It next stored that information in an XML document. Lastly, with
no change to the source data, HTML and PDF documents were produced
demonstrating details of the network. The work of this thesis
finds that the open standard tools employed are both appropriate
for, and capable of, automatically producing network documentation.
Compared to some alternate tools, they are shown to be more capable
in terms of speed, and more appropriate for learning about multiple
layers of a network. The solution is also judged to be widely
applicable to networks, and highly adaptable in the face of changing
network environments. The choices of tools for the implementation
were all largely foreign to the author. Apart from the prima facie
achievements, programming skills were significantly stretched,
understanding of the SNMP architecture was improved, and the basics
of these XML languages were gained: XSLT, XPath, and XSL-FO.
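The discovery-then-store step described in this abstract can be sketched roughly as follows; the element and attribute names are illustrative assumptions, not those of the actual application:

```python
# Sketch of the storage stage of the documentation pipeline: data
# gathered from devices (here supplied as plain dicts, standing in
# for SNMP query results) is serialised to XML, from which XSL can
# later produce HTML/PDF documentation without touching the source.
import xml.etree.ElementTree as ET

def devices_to_xml(devices):
    """Build an XML document from a list of discovered-device dicts."""
    root = ET.Element("network")
    for dev in devices:
        node = ET.SubElement(root, "device", name=dev["name"])
        for iface in dev.get("interfaces", []):
            # one element per interface, carrying its configuration
            ET.SubElement(node, "interface",
                          name=iface["name"], ip=iface["ip"])
    return ET.tostring(root, encoding="unicode")

xml_doc = devices_to_xml([
    {"name": "r1", "interfaces": [{"name": "Fa0/0", "ip": "10.0.0.1"}]},
])
```

An XSL stylesheet applied to this document would then render the HTML or PDF views, keeping the source data unchanged, as the abstract describes.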
2007 - Authorisation and Authentication
of Processes in Distributed Systems
Name: Ewan Gunn Programme: BSc (Hons) in Network Computing Completed: June 2007 Grade: 1st. Winner Young Software Engineer of the Year
award (based on Hons project), 2007; 1st prize for Real Time Award.
PDF:Authorisation and Authentication of Processes in Distributed Systems Abstract: Communications over a network from a specific
computer have become increasingly more suspect, with the increase
of various security breaches in operating systems. This has allowed
malicious programs such as worms, trojans, zombies and bots to
be developed that exploit these security holes and run without
the user being any wiser about the infection on their computer.
The current work in the field of anti-virus protection focuses
on detecting and removing any malicious software or spyware from
a computer. This is proving effective; however, it is merely a
way of treating the symptoms instead of the illness. This project
presents a hypothesis based on these situations, and attempts
to prove the effectiveness of a protocol developed specifically
to provide preventative measures to stop the spread of malicious
software, based on authentication and subsequent authorisation.
Tools such as encryption, hashing, and digital certificates were
investigated and marked for use in providing the protocol to prove
the hypothesis, and a further investigation took place of the
common principles in security in the computing paradigm such as
the CIA and AAA sets of principles, which provided a specific
context within which a protocol could be constructed. A discussion
was made of Kerberos, the only protocol that came close to a solution
to the hypothesis, along with any usefulness that protocol might
have in the situations the hypothesis is based on.
This was followed by the design of a new protocol, using a
methodology of protocol design heavily used in industry:
communication analysis and finite state machines. A further
proof-of-concept program was designed as well, to provide a facility
to test the effectiveness and efficiency of the protocol. In all
design considerations, the evaluation of such a system was a priority,
and steps were taken at the design stage to provide an easy method
to collect data results.
The system was implemented in a proof-of-concept program using
an open-source alternative to the .NET framework developed by
Microsoft, called Mono. This development environment is cross
platform and fully compliant with all versions of .NET provided
by Microsoft, thereby providing a cross-platform solution to the
problem described above. Specific concerns faced in implementation
of such a protocol were raised, and measures taken to overcome
these concerns presented, along with decisions made on options
available in the implementation.
An analysis was made of the efficiency of the resulting system,
by taking measurements of the time taken between request conception
and the subsequent request completion. Baseline measurements were
made using a simple client/server program, developed during
implementation, which could run with the authorisation service
either enabled or disabled. Measurements with the service disabled
were compared to measurements of the same system with the
authorisation service enabled. A conclusion and discussion
of the surprising results followed.
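The hashing-based authentication step and the request-conception-to-completion timing described above can be sketched as follows; the key handling and function names are illustrative assumptions, not the protocol designed in the project:

```python
# A minimal challenge-response sketch using keyed hashing (HMAC),
# one of the tool families the project investigated. SHARED_KEY is
# an assumed pre-shared secret, not part of the thesis design.
import hmac, hashlib, os, time

SHARED_KEY = b"pre-shared-secret"

def make_challenge():
    # fresh random challenge per authentication attempt
    return os.urandom(16)

def respond(challenge, key):
    # client proves key possession without revealing the key
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge, response, key):
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Time one full request, mirroring the evaluation's measurement of
# the interval between request conception and request completion.
start = time.perf_counter()
challenge = make_challenge()
ok = verify(challenge, respond(challenge, SHARED_KEY), SHARED_KEY)
elapsed = time.perf_counter() - start
```

A baseline run would skip the `respond`/`verify` calls entirely, giving the comparison the evaluation describes.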
Lastly a critique of the project is made, along with a discussion
of a theoretical situation where this system might prove beneficial;
a general discussion on the benefits of promoting preventative
measures for malicious software spread and any further work that
could be carried out specifically on the id.
2007 - Object-tracking in Health Care
Name: Vinoth Kumar Programme: MSc in Advanced Networks Completed: June 2007 Grade: P5 PDF:Object-tracking
in Health Care Abstract:
At present, wireless sensor networks and radio frequency identification
(RFID) are both emerging technologies with great potential in
seamless applications. Integrating these two technologies would
provide a low-cost solution for object identification and tracking
for a wide range of applications where object and location awareness
is crucial.
Healthcare is one of the industries where object- and location-aware
applications are most needed, particularly for enhancing patient
safety and reducing medical errors through patient identification
(Murphy & Kay, 2004). Unfortunately, the healthcare environment
is challenging: radio interference is restricted and the IT
literacy of staff is limited. It is further challenging to reassure
patients about the privacy and ethical issues that surround patient
tagging and monitoring.
The aim of this project is to review methods involved in object
identification and tracking in wireless sensor networks and to
design and implement a prototype of a wireless-enabled framework
for object identification and tracking, mainly addressing the needs
of healthcare and similar environments. This project was pursued
in collaboration with the National Health Service (NHS)
- University Hospital Birmingham (UHB), Napier University, ConnectRFID
and other industry partners. Based on onsite observation of IT
infrastructure, and patient identification problems in hospital
environment at UHB/NHS, a framework for object identification
and location tracking is designed, implemented and benchmarked.
It is developed using state-of-the-art tools and technologies
in Visual Studio .NET, C#, MS SQL Server and ZPL, with a variety
of suitable equipment chosen to meet the needs of this system.
To evaluate it, the framework is benchmarked for limitations
and performance using extensive logging of all activities and
round-trip times within parts of the framework.
The database involved is also monitored and measured for its size
in comparison with the number of activities within the framework.
The main outcome of this project is a state-of-the-art framework
that suits the needs of object identification and tracking within
healthcare environment. With minor changes the framework can be
implemented in a wide range of object- and location-aware applications
and solutions. Two novel achievements of this work are that the
framework has been applied for patent protection (Patents, 2006)
by NHS MidTECH on behalf of UHB NHS and Napier University, and that
it contributed to the writing and acceptance of a paper in the
International Journal of Healthcare Technology and Management
(Thuemmler et al., 2007).
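The activity logging and round-trip-time benchmarking described above could take roughly the following shape; the field names and example values are illustrative assumptions, not the thesis implementation:

```python
# Sketch of an activity log for tag-read events: each event records
# which tag was seen, where, and the measured round-trip time, so
# that locations can be tracked and performance benchmarked.
import time

class TrackingLog:
    def __init__(self):
        self.events = []

    def record(self, tag_id, location, rtt_ms):
        self.events.append({"tag": tag_id, "loc": location,
                            "rtt_ms": rtt_ms, "ts": time.time()})

    def last_location(self, tag_id):
        # most recent sighting wins, as in location tracking
        for ev in reversed(self.events):
            if ev["tag"] == tag_id:
                return ev["loc"]
        return None

    def mean_rtt(self):
        # benchmark figure: average round-trip time across activities
        return sum(e["rtt_ms"] for e in self.events) / len(self.events)

log = TrackingLog()
log.record("patient-0042", "Ward A", 12.5)
log.record("patient-0042", "Radiology", 9.5)
```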
Vinoth now has a job as a Design Analyst in RFID/.NET ... so
a great result!
2007 - Web-based Cisco IOS Simulator
Name: Zhu Chen Programme: BEng (Hons) in CNDS Completed: Jan 2007 Grade: 2/1
Web link: http://www.gilmoursentry.com PDF:Web-based
Cisco IOS Simulator Abstract:
Many academics have experienced the growing problems of evaluating
students’ network skills, in particular high-level skills,
using traditional exam/coursework-intensive assessments. These
problems include the time consumed in marking large numbers of
students’ scripts and the resource limitations. The danger
is that unless effective network-skills evaluation tools suitable
for academic use are supplied to help academics address these
problems, the quality of teaching is going to suffer simply because
academics cannot cope with such large numbers of students without
a significant increase in resources.
This report describes the design, implementation and evaluation
of a prototype automated network skills evaluation (ANSE) framework
system, which integrates an online learning environment,
network-skills evaluation and a student management system into a
single application, offering online assessment delivery, automated
network-skills evaluation and student management facilities to help
academics address those problems.
The implemented facilities of the ANSE framework system are provided
through a Web-based interface, which simplifies setup and allows
easy access for academics. In addition, the application provides
an online learning environment with a simple, standards-compliant
Web interface accessible from the Internet for student use. To
achieve this, the application’s front-end interfaces were developed
as a Website and the back-end facilities were implemented using
Microsoft .NET technology and a SQL Server database.
The prototype ANSE framework system was implemented based on the
requirements gathered in the literature review and on the
shortcomings identified in research into commercial router
simulator tools. The system was evaluated from a variety of aspects
to estimate its contribution to assessing large numbers of
students’ network skills and its suitability for use in an
academic environment. A number of technically experienced
networking PhD students acted as system evaluators to help
identify issues related to usability and functionality, and
the online learning environment interface was refined through
expert evaluation by the project supervisor. The final prototype
system was deployed on an Internet Web server to test its performance,
and the evaluation data were presented showing its advantages.
From the analysis of the evaluation results, and despite some
functional weaknesses, the implemented ANSE framework system can
be considered an advancement over commercial router simulator
tools in tackling the problems of traditional exam/coursework-intensive
assessment in an academic environment, due to the advantages brought
by its Web-based interface, network-skills evaluation and student
management abilities.
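The core of automated network-skills evaluation is comparing a student's device configuration against a model answer. A minimal sketch of such marking, with an assumed command set and scoring scheme (not the ANSE system's own), might look like this:

```python
# Hypothetical automated marking: award credit for the fraction of
# expected configuration commands present in the submission,
# ignoring case differences in IOS-style commands.
def mark_submission(expected, submitted):
    """Return a mark out of 100 for the submitted command list."""
    expected = {cmd.strip().lower() for cmd in expected}
    submitted = {cmd.strip().lower() for cmd in submitted}
    hit = len(expected & submitted)
    return round(100 * hit / len(expected))

model_answer = [
    "interface FastEthernet0/0",
    "ip address 192.168.1.1 255.255.255.0",
    "no shutdown",
    "router ospf 1",
]
student = [
    "interface fastethernet0/0",
    "ip address 192.168.1.1 255.255.255.0",
    "no shutdown",
]
score = mark_submission(model_answer, student)
```

A real marker would also have to weight commands and check ordering where it matters; this sketch only illustrates why marking can be automated at all.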
2007 - Analysis of Image-based Authentication
Name: Lee Jackson Programme: BEng (Hons) in Software Development Completed: Jan 2007 Grade: 2/1 PDF:Analysis
of Image-based Authentication Systems Abstract:
In recent years, the inadequacies of the traditional text-based
password have been clearly demonstrated. An increase in the need
for security systems on the internet and in organisations has
meant that people now have far more passwords than at any time
in the past. Users of passwords have often become blasé
about their security and will frequently use the same password for
all their authentication. With increases in intrusions into
computer systems, the password has had to evolve from a simple
dictionary or personal piece of text, to a nonsensical mixture
of upper and lower case characters mixed with numbers. This new
approach to passwords has shown itself to be at odds with the
human brain's ability to remember such strings. However, studies
conducted on human memory patterns show that the brain is far
more adept at remembering images.
To test this theory we began by looking into human memory patterns
and found that images, faces and text mixed with images all seemed
to offer good results concerning human recollection. Further research
was undertaken into other possible authentication methods of the
future such as biometrics and token-based, as well as looking
into previous incarnations of image or graphical password systems.
Using the theory research and the findings from previous
incarnations of the image-based approach, we designed a prototype
that could be used to evaluate the possibility of image-based
authentication being the main security method of the future. We
began by implementing class diagrams to show how the interfaces
would connect. The interface layouts were then designed with the
emphasis on novel solutions to usability. Five experiments were
also designed to test usability, recall, security, methodologies
undertaken, how security conscious users were and whether humans
were able to remember passwords on multiple image-based interfaces.
The application design was then implemented into an image-based
authentication system containing three interfaces. The interfaces
include a picture-based, facial picture-based and a story-based,
which mixes text with images. This application was achieved using
Visual Studio .NET, with C# as the programming language of choice.
The completed Image-Based Password System prototype was then
evaluated using the experiments that had been designed. The results
of the experiments showed that recall levels over the three interfaces
were just slightly under 90%. The best performers were the Story-Based
and Picture-Based interfaces. The results also showed that users
were generally quite able to use an image-based system with little
difficulty. The main finding of the experiments showed that users
were quite able to remember passwords contained on different image-based
interfaces. If image-based authentication was to become the security
method of the future, it is extremely likely that the interfaces
on individual security systems would be different. The experiments
showed that the human brain was able to hold passwords on three
completely different interfaces successfully.
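Whatever the interface, an image-based password reduces to a sequence of selected images that must be checked on login. A small sketch, assuming the sequence is stored as a salted hash just as text passwords are (an assumption, not the prototype's storage scheme):

```python
# Hypothetical image-password check: the enrolled secret is the
# ordered sequence of image identifiers the user picked, stored
# only as a salted SHA-256 digest.
import hashlib

def enrol(image_sequence, salt):
    data = salt + "|".join(image_sequence).encode()
    return hashlib.sha256(data).hexdigest()

def authenticate(image_sequence, salt, stored_digest):
    # order matters: the same images in a different order must fail
    return enrol(image_sequence, salt) == stored_digest

salt = b"per-user-salt"
stored = enrol(["cat", "bridge", "face3"], salt)
```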
2007 - Generic Firewall Rule Compiler
Many types of systems have different syntaxes for defining firewall
rules, such as Cisco devices, which use ACLs, and Linux firewalls,
which use Netfilter (iptables). The aim of this project is to
define a generic firewall syntax, such as the one used in Al-Shaer
(2004), and to develop and evaluate a compiler which converts the
generic format into the platform-specific syntax. A basic outline
of this has been created by Saliou (2006), and the project will
enhance this into a form which can be used in a security framework.
The objectives were:
• Develop a Firewall Rule Compiler and Modeller.
• Define a syntax for firewall modelling.
• Develop a translator capable of using the defined syntax, and
producing Cisco and Linux equivalents.
• Implement rule crunching and compression:
  • Find multiple rules that can be replaced by a single rule
  through subnetting.
  • Find where multiple block/accept rules can be inverted,
  i.e. instead of allowing hundreds of machines in a subnet, block
  the remaining machines, and allow the entire subnet thereafter.
• Implement anomaly discovery:
  • Find repeated rules.
  • Find rules that are shadowed by other rules.
  • Remove anomalies.
• Implement a GUI for easy input.
1. Al-Shaer et al. (2004), “Modeling and Management of Firewall
Policies.”
2. E. Al-Shaer and H. Hamed, “Firewall Policy Advisor for
Anomaly Detection and Rule Editing.”
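The translator objective can be illustrated with a single generic rule emitted in both target syntaxes. The generic format below is an assumption for illustration, not Saliou's (2006) actual syntax:

```python
# One generic rule (action, protocol, source, destination, port)
# rendered as a Cisco extended ACL line and an iptables command.
def to_cisco(rule):
    return (f"access-list 101 {rule['action']} {rule['proto']} "
            f"host {rule['src']} host {rule['dst']} eq {rule['dport']}")

def to_iptables(rule):
    # map the generic action onto the iptables target name
    target = {"permit": "ACCEPT", "deny": "DROP"}[rule["action"]]
    return (f"iptables -A FORWARD -p {rule['proto']} -s {rule['src']} "
            f"-d {rule['dst']} --dport {rule['dport']} -j {target}")

rule = {"action": "deny", "proto": "tcp",
        "src": "10.0.0.5", "dst": "192.168.1.10", "dport": 80}
cisco_line = to_cisco(rule)
iptables_line = to_iptables(rule)
```

Rule crunching and anomaly discovery would operate on the generic representation before either back-end is invoked, which is precisely why a single intermediate syntax is worth defining.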
2006 - Application Layer Covert Channel Analysis
Name: Zbigniew Kwecka Programme: BSc (Hons) in Network Computing Completed: June 2006 Grade: 1st class
Winner Young Software Engineer of the Year award (based on Hons
project) - Runner-up prize, Best Hons project in Scotland [link]
PDF:Application Layer Covert Channel Analysis Abstract:
The specification of the Internet protocol stack was developed to
be as universal as possible, providing various optional features to
network programmers. Consequently, existing implementations
of this specification use different methods to implement the same
functionality. This has created a situation where optional fields and
variables are often transmitted only to be ignored or discarded
at the receiving end. Transmission of these fields reduces the
bandwidth available to data transfers; however, a redesign of the
network protocols is, for various reasons, considered impossible
at the present time, and this downfall of the Internet protocol
stack is silently accepted. Since the optional fields discussed
are of no real value anymore, they are often left unmonitored.
This in turn allows for the implementation of covert channels.
Techniques of information hiding in covert channels have been
known for some time now. By definition, this involves hiding
information in a medium which is not usually used for any form of
information transfer. For instance, the purpose of the envelope
in standard mail communication is to enclose the message and provide
space for addressing. However, even if the messages were under strict
surveillance, information hidden under the stamp on the envelope
could go unnoticed by the examiner. This is how covert channels
operate: they use resources often perceived as safe, and unable
to carry data, to hide a covert payload.
This dissertation investigated the Internet protocol stack and
identified the Application Layer as the level most vulnerable to
covert channel operations. Out of the commonly used protocols,
SMTP, DNS and HTTP have been recognised as those which may carry
hidden payloads in and out of secure perimeters. Thus HTTP, a
protocol often wrongly perceived as a simple text-based information
transfer protocol due to its innocent-sounding name, was investigated
further. Since there was no tool available on the market for HTTP
monitoring, a set of test tools was developed in this project using
the C# programming language, which is starting to become a new
networking industry standard for application deployment. The analysis
of current trends in covert channel detection, and the statistics
collected on current implementations of the protocol, led to the
design and implementation of a suitable HTTP covert channel detection
system. The system is capable of detecting most covert channel
implementations which do not mimic the operation of an HTTP browser
driven by a user. However, the experiments also proved that, for a
successful system to operate, it must fully understand the HTTP
protocol, recognise signatures of different HTTP implementations
and be capable of anomaly analysis.
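One simple example of the signature recognition the conclusion calls for is checking whether a request's header ordering matches a known browser implementation. The signature below is an illustrative assumption, not one collected by the project:

```python
# Hypothetical browser signature: the relative order in which a
# given HTTP implementation emits its headers. Covert-channel
# clients often get this ordering wrong.
FIREFOX_LIKE_ORDER = ["Host", "User-Agent", "Accept", "Connection"]

def header_order_matches(headers, signature):
    """True if the headers that appear do so in the signature's order."""
    seen = [h for h in headers if h in signature]
    expected = [h for h in signature if h in seen]
    return seen == expected

normal = ["Host", "User-Agent", "Accept", "Connection"]
suspect = ["User-Agent", "Host", "Accept", "Connection"]  # reordered
```

A production detector would combine many such signature features with anomaly analysis, as the abstract concludes; ordering alone is easy to mimic.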
2006 - ANALYSIS OF REGION-OF-INTEREST
COMPRESSION OVER LIMITED-BANDWIDTH SYSTEMS FOR SMALL-SCREEN DEVICES
Name: Andrew Jameson Programme: MSc in Advanced Networking Completed: April 2006 Grade: P5 PDF:ANALYSIS
OF REGION-OF-INTEREST COMPRESSION OVER LIMITED-BANDWIDTH SYSTEMS
FOR SMALL-SCREEN DEVICES Abstract:
This dissertation sets out a method of preserving detail inside
the Region-Of-Interest (ROI) in a JPEG image whilst reducing the
file size of the compressed image. Many small-screen devices (such
as legacy mobile phones) have the capability to process JPEG images
but lack the bandwidth available to stream these. This dissertation
will show that by reducing the amount of detail in the
Region-Of-Background (ROB) it is possible to reduce the file size
of JPEG images by approximately 50%, thus maximising the limited
bandwidth.
The process used to achieve this assumes that the level of detail
within the ROB carries less significance and can thus be reduced
in quality. This is equivalent to implementing a quantisation table
with greater values, though one which operates only on a particular
area of the image.
This dissertation shows that this averaging process can be optimised
by considering certain discrete values of pixels and that by a
combination of these discrete values the ROB can be graded from
the boundary of the image towards the boundary of the ROI.
In this manner a standard JPEG image can be processed prior
to compression in such a way that the detail within the ROI is
preserved while the ROB shows progressively less detail towards
the image boundaries. While this process aids the JPEG process
it sits outwith the standardised JPEG compression process and
thus does not require any additional software or hardware to display
an image compressed using this method.
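The grading idea above, full detail inside the ROI falling off towards the image boundary, can be sketched as a per-pixel quality weight. The linear fall-off and the pixel-level (rather than 8x8 block) model are simplifications, not the thesis method:

```python
# Illustrative quality grading: pixels inside the ROI keep weight
# 1.0; outside it, the weight decreases linearly with distance from
# the ROI, down to min_q at the farthest image boundary.
def quality_at(x, y, roi, width, height, min_q=0.2):
    """Return a quality weight in [min_q, 1.0] for pixel (x, y),
    where roi = (left, top, right, bottom)."""
    left, top, right, bottom = roi
    if left <= x < right and top <= y < bottom:
        return 1.0
    # Chebyshev distance from the ROI edge
    dx = max(left - x, x - (right - 1), 0)
    dy = max(top - y, y - (bottom - 1), 0)
    dist = max(dx, dy)
    # normalise by the largest distance any pixel can have
    max_dist = max(left, width - right, top, height - bottom, 1)
    frac = min(dist / max_dist, 1.0)
    return 1.0 - frac * (1.0 - min_q)

q_inside = quality_at(30, 30, (20, 20, 40, 40), 60, 60)
q_edge = quality_at(0, 30, (20, 20, 40, 40), 60, 60)
```

Such a weight map would be applied before compression, which is why the scheme stays outwith the standardised JPEG pipeline and needs no decoder changes.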
2005 - Distributed Honeypots
Name: Peter Jackson Programme: BEng (Hons) in CNDS Completed: Deb 2005 Grade: Merit PDF:Distributed Honeypots Abstract:
The increasing use of computer communication for many day to day
tasks has resulted in a greater reliance on communication networks
such as the Internet. The impact of a serious interruption to
the operation of the Internet may have far reaching and costly
consequences. The Internet has experienced several incidents caused
by network worms, including an almost total shutdown of the network
as a result of the Morris worm in 1988. This project covers
the design, implementation and evaluation of a distributed honeypot
system that provides the facilities to centrally log threat information.
A system of this nature may collect information regarding a threat
at the early stages of infection, allowing the possibility of
an effective response being deployed. A number of software components
have been developed in several programming languages, including
C, Perl and PHP. The prototype system runs on a Linux-based operating
system. Experiments were performed that demonstrated the system's
ability to detect new threats within a short period of their first
appearance.
2005 - Dynamic Detection and Immunisation of Mal-ware using Mobile Agents
Name: HUSSAIN ALI AL SEBEA Programme: MSc Completed: June 2005 Grade: Merit PDF:DYNAMIC
DETECTION AND IMMUNISATION OF MAL-WARE USING MOBILE AGENTS Abstract: At present, malicious software (mal-ware) is
causing many problems on private networks and the Internet. One
major cause of this is outdated or absent security software, such
as antivirus software and personal firewalls, to counter these
threats. Another cause is that mal-ware can exploit
weaknesses in software, notably operating systems. This can be
reduced by use of a patch service, which automatically downloads
patches to its clients. Unfortunately, this can lead to new problems
introduced by the patch server itself.
The aim of this project is to produce a more flexible approach,
in which agent programs are dispatched to clients (which in turn
run static agent programs), allowing them to communicate locally
rather than over the network. Thus, this project uses mobile agents:
software agents which can be given an itinerary and migrate between
different hosts, interrogating the static agents therein
for any suspicious files. These mobile agents are deployed with
a list of known mal-ware signatures and their corresponding cures,
which are used as a reference to determine whether a reported
suspect is indeed malicious. The overall system is responsible
for Dynamic Detection and Immunisation of Mal-ware using Mobile
Agents (DIMA) on peer-to-peer (P2P) systems. DIMA can be categorised
under Intrusion Detection Systems (IDS) and deals with the specific
branch of malicious software discovery and removal.
DIMA was designed using Borland Delphi to implement the static
agent due to its seamless integration with the Windows operating
system, whereas the mobile agent was implemented in Java, running
on the Grasshopper mobile agent environment, due to its compliance
with several mobile agent development standards and in-depth documentation.
In order to evaluate the characteristics of the DIMA system a
number of experiments were carried out. This included measuring
the total migration time and host hardware specification and its
effect on trip timings. Also, as the mobile agent migrated, its
size was measured between hops to see how this varied as more
data was collected from hosts.
The main results of this project show that the time the mobile
agent took to visit all predetermined hosts increased linearly
as the number of hosts grew (the average inter-hop interval was
approximately 1 second). It was also noted that modifications
to hardware specifications in a group of hosts had minimal effect
on the total journey time for the mobile agent. Increasing a group
of hosts' processor speeds or RAM capacity made only a subtle
difference to round-trip timings (less than 300 milliseconds faster
than a slower group of hosts). Finally, it was proven that as
the agent made more hops, it increased in size due to the accumulation
of statistical data collected (57 bytes after the first hop, and
then a constant increase of 4 bytes per hop thereafter).
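The measured growth pattern, 57 bytes after the first hop and a constant 4 bytes per hop thereafter, can be modelled directly; the helper below simply reproduces that reported trend:

```python
# Model of the agent's statistics payload as it migrates: the first
# hop contributes 57 bytes, and every subsequent hop adds a fixed
# 4 bytes of collected data (figures as reported in the abstract).
def agent_payload_size(hops):
    """Accumulated statistics payload in bytes after `hops` hops."""
    if hops < 1:
        return 0
    return 57 + 4 * (hops - 1)

sizes = [agent_payload_size(h) for h in (1, 2, 10)]
```

The linear form also mirrors the other headline result: total trip time grew linearly with the number of hosts visited.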
The Internet is a worldwide computer network consisting
of many globally separate yet interconnected networks. In
a world which is dependent on information exchange, the
Internet has proven itself as an indispensable communications
link. As more computer systems become interconnected via
the Internet, the spread of malicious software has managed
to highlight the inadequacies in the software installed
on those systems and the current protocols in use for global
communication. To highlight the extent of this problem,
the SANS institute predicts that it will take an unprotected
computer running a version of Microsoft Windows only 20
minutes to become infected.
The reality of these ever-increasing threats, which could
harm and affect the viability of this infrastructure, is
an important and timely area for research. This project
explores the modern network based security threats which
threaten to disrupt modern communication channels.
Broadly, current solutions for network protection include
limited automation and user intervention. These techniques,
however, only provide some degree of detection and mitigation
against already known threats. Current network security
is based on the idea that data passing along the communication
channels between nodes can be examined in “real time”
but with some time delay i.e. during active operations.
The latter is the basis for the current major approaches
for network protection using Intrusion Detection (ID). In addition,
firewall technology allows control over the entry of data
that is deemed suspicious or malicious.
The solution proposed to the problem of network security
in this project utilises the key concept of dynamically
reconfigurable equipment. The system uses malicious data
collected from the network via a Network Intrusion Detection
System (NIDS) that combines with agent technology to interact
with the routing hardware to deploy a response to the threat.
By using the combined function of agent technology and the
intelligence built into this software, the system is able
to log all threat data and provide dynamic reconfiguration
of network hardware in the face of a malware attack. It
is proposed that a test bed be developed to test the functions
and capabilities of this system. Both the mitigation system
and the test bed will provide a framework for future testing,
design and evaluation of such a network security system.
From the results of the experiments conducted on the system,
it can be concluded that it can stop a simulated network
threat. The experiments show that although a malicious threat
can be stopped, more modern examples of worm technology
will be able to penetrate the system. The prototype system
implemented in this project managed to let in on average
197 instances of malicious threats which translates into
4728 bytes of data. When relating these findings to modern
worms, it can be seen that the Code Red worm would have
managed to deploy one instance of itself onto the target
network. The Nimda worm would have been stopped as it is
60Kb in size. The slammer worm, though, would have been
able to deploy roughly 12 copies of itself onto the defence
network. Because of this, the eventual role of any such
defence system may be that of damage limitation and backtracking.
By using distributed agent and NIDS technologies, it may
be possible to create a more effective response to such
threats. This would use a combination of distributed sensing
capabilities and feedback from heterogeneous network sensor
and data types. This information in turn would be processed
and analysed by a co-ordination system which would control
and direct any response to an attack.
Winner Young Software Engineer of the Year award (based
on Hons project)
- Runner-up prize, Best Hons project in Scotland [link]
Data hiding methods can be used by intruders to communicate over
open data channels (Rowland 1996; deVivo, deVivo et al.
1999), and can be used to overcome firewalls, and most other
forms of network intrusion detection systems. In fact, most
detection systems can detect hidden data in the payload,
but struggle to cope with data hidden in the IP and TCP
packet headers, or in the session layer protocol.
This Honours Project proposes a novel architecture for
data hiding, and presents methods which can be used to detect
the hidden data and prevent the use of covert channels for
its transmission. It also presents the method used in creating
a system for Microsoft Windows platforms.
The scenario consists of a user who connects from his computer
to a web server. In fact, the connection is made to a
Reverse Proxy Server (RPS), which is in charge
of connecting to the Web Server, collecting the requested
information and returning it to the user. For the user, this
action takes place in a transparent manner, as if he had
connected directly to the Web Server. It should be highlighted
that the RPS does not need to be configured in advance
by the user. Because of this, the RPS becomes a strategic
piece of middleware software, able to analyse and manipulate
the traffic between a user and a server on the Internet
in a discreet manner.
This ability has been used in this project to send covert messages in the outgoing packets that leave the RPS towards the user. While navigating, the user can observe the incoming covert message through a separate window provided by an application called the Covert Viewer. A network packet sniffer has also been developed in this Honours Project, so that the user can observe how the message is carried by the network packets, using a technique explained later on. On the RPS side there are essentially two applications: the Data Hiding Intelligent Agent (DHIA) and the RPS itself. As explained above, the mission of the RPS is to capture the user's requests, transmit them to the relevant server, collect the answer from the server and send it back to the user. The DHIA is in charge of manipulating the outgoing packets to carry the covert message. The technique it uses is to insert, into the identification field of the IP header (version 4), the ASCII value of the character to be sent. As explained further on, the TCP/IP protocol suite has weaknesses in its design that facilitate the manipulation of its characteristics. Through an XML file, the DHIA can be configured to send covert messages to specific IP addresses that have requested a specific port.
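The embedding step described above, placing the ASCII code of each character in the 16-bit IP identification field, can be sketched as follows. This is a minimal illustration using hand-packed headers, not the project's actual DHIA code; the header checksum is left at zero for brevity:

```python
import struct

def build_ipv4_header(src, dst, ident, payload_len=0):
    """Build a minimal IPv4 header whose identification field carries `ident`."""
    version_ihl = (4 << 4) | 5            # IPv4, 5 x 32-bit words (20 bytes)
    total_len = 20 + payload_len
    return struct.pack("!BBHHHBBH4s4s",
                       version_ihl, 0, total_len,
                       ident,             # covert byte travels here
                       0,                 # flags / fragment offset
                       64, 6, 0,          # TTL, protocol (TCP), checksum omitted
                       bytes(map(int, src.split("."))),
                       bytes(map(int, dst.split("."))))

def embed_message(message, src, dst):
    """Sender side: one packet per character, the ASCII code in the ID field."""
    return [build_ipv4_header(src, dst, ord(ch)) for ch in message]

def extract_message(headers):
    """Receiver side: pull each character back out of the ID field."""
    return "".join(chr(struct.unpack("!H", h[4:6])[0]) for h in headers)
```

Because the identification field is a legitimate, normally arbitrary header value, traffic modified this way is exactly what most payload-inspecting detection systems struggle with.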
Paper Published: 3rd International Conference on Electronic
Warfare and Security (EIWC)
This report describes the investigation process
used in the study of the use of an agent-based system to
dynamically route network traffic over ad-hoc wireless networks.
The traditional Network and Systems Management approach
relies on a centralised, Client/Server approach where network
performance data is stored in a central location. This centralised
paradigm has severe scalability problems as it involves
increasing amounts of data transfers of management data
as the size of the network increases (Gavalas, D., et al
2001). There are many research projects investigating the
use of mobile agents to decentralise the transfer of network
management data (Gavalas, D., et al 2001, Puliafito, A.,
and Tomarchio, O., 2000, Lee, J.O. 2000, Papavassiliou,
S., et al 2001); the main role of the mobile agents is to
migrate throughout the network collecting data as they hop
from host to host. This approach eases the problem of increasing traffic congestion, but still poses the problem of continued reliance on central repositories to hold the collected data. It highlights the benefits of the mobile agent, which has the ability to migrate from host to host, gathering data as it travels; unfortunately, it fails to address the problem of access to local resources for performance measurement information, which could be used to calculate the best routing path through the network.
As networks are getting larger and substantially
more complex in companies and institutions, the demands
expected of these networks are high. This is indeed the
case in environments that rely heavily on e-commerce and
the use of technology to do their business. Due to the great
reliance upon these networks, the networks must be created,
designed, and more importantly, maintained to a high standard.
This includes the constant resolution of anomalies found
on the network known as faults. To automate the detection
and resolution of faults, fault management systems are designed
to be implemented upon large networks to find and correct them.
This report proposes a possible fault diagnosis system
that is distributed across the
network utilising agent technologies to gather data from
several network nodes and
use it to pinpoint potential faults. To implement the backbone
of the system, the
Simple Network Management Protocol is used to communicate
data between both the
distributed agents and a central application. It is at this central application that the data is processed and displayed to the likes of a network administrator.
After several fault detection methods were analysed, two data collection methods were chosen. One of these was to collect
data from the hardware of a
node, to determine if the hardware was functional, and if
the link was intact.
Furthermore, to determine routing problems, the Internet
Control Message Protocol
packets were captured from every host too. A symptom-aggregation approach would then be used to divide the collated data into three distinct model types that would
determine if the problem was: client-server based, server/host
based or an attack.
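The three-way split described above could be sketched as a simple classifier. The field names and thresholds here are illustrative assumptions, not the rules the project actually used:

```python
def classify_symptom(report):
    """Map a collated symptom report to one of the three model types
    (client-server based, server/host based, or an attack)."""
    if report.get("icmp_unreachable", 0) > 0 and report.get("link_up", True):
        # Link is intact but the service is unreachable: client-server problem
        return "client-server"
    if not report.get("link_up", True) or not report.get("hardware_ok", True):
        # Hardware or link failure points at the server/host itself
        return "server/host"
    if report.get("icmp_echo_rate", 0) > 1000:
        # A flood of echo requests suggests a deliberate attack
        return "attack"
    return "unknown"
```

In the proposed system such a function would run at the central application, over data delivered by the distributed agents via SNMP.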
Wireless networks are now becoming an important element
in mobile networks. These networks can be ad-hoc in style,
and can allow for multiple networks to connect together.
The ad-hoc nature of these networks allows nodes to connect to others and to act as routers and forwarding nodes between interconnected domains. Routing protocols provide the base
for the routing of data between these wireless nodes. Although
there are many routing protocols at present there is no
definitive routing protocol in place that handles the potentially
dynamic attributes of a wireless ad-hoc network, along with
the need to transfer data in such an environment.
This report describes the identification of potential
problem areas within these routing protocols. Route acquisition,
expiration and maintenance methods used by the routing protocol
are all factors that contribute to the overall performance.
Another major factor that contributes to routing protocol
performance is when wireless nodes move out of the coverage
area of other nodes and thus lose connection, or the opposite,
where other wireless nodes join the network while on the
move and require a dynamic route to a destination; this
is defined as the mobility factor. The main models that
are applicable in real-world wireless topologies are designed
in this report to establish how these factors and varying
metrics affect the routing protocol in different situations.
The multimedia model shows how well the routing protocol
can handle real-world traffic like streaming video and audio,
and the need to try and guarantee Quality of Service, QoS,
for this type of data. The circular model attempts to address
the potential problems with ad-hoc routing where there is
only one destination node, mirroring a real life hotspot
scenario, where receivers on buildings, for example, can
act as a gateway to the Internet for any number of wireless
nodes. Lastly, the dynamic model attempts to address problems
with an extremely dynamic topology where nodes leave and
join the network continually, putting extreme strain on
the routing protocol, and network overheads.
The main objectives of this project were to investigate these models and appraise their performance after implementation in the network simulator, ns-2, and to investigate which models
work well in which situations. Varying the packet size and
the mobility of nodes showed that the routing protocol performed
poorer under more stressful situations. The added enhancement
of prediction incorporated into the multimedia models almost
guaranteed QoS. Overall, the main performance factors affecting the routing protocol were investigated, to help define which routing protocol metrics have the greatest effect.
The report concludes with suggestions for further work
and conclusions based on the information presented. The
later sections show that routing protocols could be enhanced
with the addition of mobility prediction and changes to
the way routing protocols find and maintain routes. Further
work could include the testing of these models over wireless networks with a specific routing protocol in mind, such as the proposed AOMDV, or a routing protocol with the above enhancements.
With the modern implementation of new standards for
high-rate wireless LANs (WLANs), mobile users are promised
the levels of performance, throughput, security and availability
equivalent to those of traditional wired Ethernet. As a
result, WLANs could be on the brink of becoming a conventional
connectivity solution for a broad range of business and
home users. If WLANs are to replace or complement the traditional wired LANs, a significant question is how
and to what extent wireless technologies can handle data
traffic differently from wired technologies. Issues such
as bandwidth, effective throughput, security, range and
reliability will decide the future of wireless LANs.
However, Ethernet technology has evolved and matured over
a long time period and it is regarded as the most popular
LAN technology in use today. Ethernet is admired because
it strikes a good balance between speed, cost and ease of
installation. Business and home users are reluctant to migrate from their existing wired LANs because they are unsure of the abilities of the wireless equivalent.
This project thoroughly investigated current WLAN technologies
in an effort to shed some light on their current abilities
and their potential for becoming a mainstream connectivity
solution. In particular, the investigation concentrates
on IEEE 802.11b standard and its equivalent counterpart
the 10Mbps Ethernet. Neither of these technologies is currently the best available in its own category. However, enhanced versions of these technologies work in fundamentally the same manner. Therefore, any conclusions made in this report
can be applied to other respective standards.
Initially the report presented all necessary information
regarding fundamental networking concepts which apply to
both technologies followed by a detailed investigation of
the standards themselves. This was followed by a substantial
number of previously designed experiments which were based
on the findings from initial investigation of the two technologies.
The extent of the experimentation went well beyond many currently available research papers on the same topic.
The results indicated some major issues with the 802.11b standard, as well as some fundamental differences between the two technologies.
Although there have been significant developments in wireless
technology with a maximum air rate of 11Mbps available for
802.11b products, the wireless overheads are still large
and this still limits the throughput. The overall throughput
for the wireless systems that have been tested is ~40% of
the maximum air rate and therefore figures of 4.7Mbps are
obtainable. These figures are approximately the maximum
figures that can be obtained, but the overall throughput
may be lower depending on the environment the system is
operating in, and the number of people using the network
at one time. At the same time, Ethernet has performed significantly
better. Maximum throughput obtainable was 90% of the maximum
bit rate and figures of 8.9Mbps were easily achievable.
However, instability outside the preferred operating regime
is not unique to WLANs. In many cases Ethernet’s performance
degraded as circumstances changed.
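The figures above follow from a simple overhead calculation: effective throughput is the air (or bit) rate multiplied by the fraction that survives protocol overheads. The efficiencies below are the measured ratios reported in this project, not derived constants:

```python
def effective_throughput(air_rate_mbps, efficiency):
    """Application-level throughput after protocol overheads (illustrative)."""
    return air_rate_mbps * efficiency

# 802.11b: ~40% efficiency of the 11 Mbit/s air rate gives the ~4.7 Mbit/s seen
wlan = effective_throughput(11.0, 0.43)
# Ethernet: ~90% efficiency of the 10 Mbit/s bit rate gives ~8.9 Mbit/s
ethernet = effective_throughput(10.0, 0.89)
```

The gap between the two efficiencies, rather than the headline rates, is what drives the performance difference reported here.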
Following the experiments, a move to 802.11b from an existing wired network is not recommended unless there is an overwhelming need to do so. The IEEE 802.11b WLAN has
its places, and in those places it performs well. Furthermore,
it should be considered as part of a hybrid solution due
to deployment advantages and the mobility it provides to
users. Future wireless technologies will no doubt improve,
but the limitations of a shared medium and reduction in
throughput with range will still need to be taken into consideration.
The aim of this project has been the evaluation of mobile agents for their use in ad-hoc networks, especially in relation to wireless applications. It integrates with research being conducted in the School of Computing at Napier on the use of agent-based systems to provide on-demand routing through ad-hoc wireless networks. This report presents a study of mobile agent programming over ad-hoc networks, and especially wireless ones.
Mobile Agents and wireless networks are two cutting-edge
technologies that will provide enhancements for increased
connectivity and communicability. Mobile agents are application
programs that not only have the ability to move by themselves
from machine to machine autonomously, but also the capability
to interact with their environment. Wireless network technologies use radio communication networks whose main goal is to provide computers, handheld devices and even mobile phones with rapid connectivity. In a wireless network, any device can easily join or leave a work group without the need for a physical connection.
These two technologies are fascinating in themselves, but their interest increases when they are combined. Indeed, mobile agents, by providing a new paradigm for computer interactions, give developers new options for designing applications based on computer connectivity. Because of this, they seem particularly well fitted to move around wireless networks in a more elegant manner than common applications based on static models, such as the client-server model.
In this report, mobile agents were used to build a prototype of a routing system specific to wireless networking needs.
The aim was to be able to dynamically set a path whenever
a computer wants to access resources remotely located over
the wireless network, and so benefit from a greater mobility
to analyse network status directly on the host, rather than
remotely over the network. Greater mobility must not hide the fact that bandwidth is an important factor in networking, and thus it is a major focus of this project.
In general, the project evaluates the main mobile agent
development systems, such as IBM Aglets, Tracy and Grasshopper,
of which Grasshopper was chosen as it supports mobile devices,
such as PDAs, and has extensive documentation.
The model uses static agents which interface to local databases,
and mobile agents which migrate around the network and communicate
with static agents. This method enhances the security of
the overall system, as mobile agents are not allowed to
interface directly with the hosts.
Mobile and static agents have been developed using Java JDK 1.4, and tests show migration timings for differing database sizes and different migration strategies. Each host uses a JDBC database to store data, and uses a proxy for asynchronous communication between the static and the mobile agent. The conclusion outlines the general benefits of mobile agents, and recommends future work.
This report describes the design, development and
evaluation of a file sharing system that proposes a novel
solution to the shared file security problem. The system
will allow users to share files in a secure manner and comprises client and server applications. The client allows users
to connect to a server and shares their files amongst all
other system users. The client also gives users the ability
to search for files shared by the other system users, and
when a file is found it could be transferred from the other
user securely, as the file would be encrypted. The server
allows valid users to connect, records their shared file
list and enables connected users to search this list. The
server authenticates connected users by performing a test
that only a valid user can respond to in the correct manner.
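The report does not specify the authentication test itself, but one standard construction that fits the description, a test only a valid user can answer correctly, is an HMAC-based challenge-response; the sketch below is an assumption, not the project's actual protocol:

```python
import hashlib
import hmac
import os

def issue_challenge():
    """Server: generate a fresh random nonce for the connecting client."""
    return os.urandom(16)

def respond(challenge, shared_secret):
    """Client: prove knowledge of the shared secret without transmitting it."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(challenge, response, shared_secret):
    """Server: only a holder of the secret can produce the expected response."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

The key property is that the secret never crosses the network; an eavesdropper sees only a nonce and a one-time digest that is useless for future challenges.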
The report outlines research in the area of networking,
including technological backgrounds such as distributed
file system architectures, cryptographic techniques and
network programming methods.
The novel feature of the system is the method used to
address the security problem inherent to all current file
sharing technologies. The system developed uses cryptographic
techniques to implement a framework in which to model system
security, including authentication of system users and the
files that they share. The report defines the strength of
the encryption, and makes recommendations for enhancements.
A major objective of the system is to provide a model that scales easily with the number of clients. The tests performed show that the response time of the system remains fairly linear with the number of concurrent clients.
Additional tests have also shown that MySQL is vastly superior
to Access XP, especially with 10 clients logging on simultaneously.
In this case, MySQL is almost five times faster than Access XP.
The report concludes with recommendations for future work,
such as an addition to the client application requirements,
improvements to the encryption speed and strength, further
performance tests and a test to investigate network traffic
generated by clients.
The aim of this document is to examine the Microsoft .NET Framework and compare it with Sun’s Java. The latest
version of Microsoft’s programming suite, Visual Studio,
incorporates a new foundation known as the .NET framework.
It is designed partly as a way of removing the device dependence
of the programs that use it (as with Java). The differences
in specification from Java mean that systems are going to
have different strengths and weaknesses. This document examines
the difference between the structural features of both systems
and evaluates the effect of these differences on the efficiency
of the produced applications. The effect that differences in some of the important libraries have on the efficiency of a development team using the system is also examined. As
part of this, the common language infrastructure of .NET
is also examined and the (potential) use of multiple languages
in .NET and Java investigated. An objective of this report
is to draw conclusions as to the strengths of each platform,
and these are drawn together with those gained as an example
application is produced in each system.
The sample application was designed to contain as many
of the important features as possible and to reflect a real-world
enterprise application. This was then created in both .NET
and Java to examine the usability of each. The criteria
used to evaluate the development process for these applications
were decided and conclusions drawn from them in addition
to the overall experiences with each system. The final conclusions
are then summarised.
It was found that .NET is a step forward from Java.
The basic system is more comprehensive and easier to use.
Generally, .NET is easier to use, largely because Java has become fragmented with all its add-ons. The report defines a number of attributes for the frameworks, such as reusability, compatibility, portability, and so on. The result of this appraisal is that, in most cases, .NET provides a better system than Java. Java, though, scores well in its costs and compatibility,
while .NET does well in team working, documentation, speed
of code, and reusability.
The aim of the project was to develop a router emulator
including configuration. A fully implemented package can
be used to enhance the learning of router programming. The
goals of the project were to deliver the documents and produce
a fully working program that interprets and performs most
of the IOS commands in order to be able to perform a full
router configuration and improve router-programming skills.
In addition, the software includes specific functions (opening configuration files, saving configuration files, and viewing feedback on the current configuration). A good understanding of how the IOS commands are structured, and of the behaviour of a physical router, is really important in order to build software that is as close to reality as possible.
A router is a networking component which operates at layer 3 of the OSI model. Routers allow data to be passed from
one network to another based on the destination network
address of a data packet. In order to forward data traffic, a router checks the data address, compares it with its tables of IP addresses and forwards the packets to the intended
networking device. Routers are complex devices, which require
commands to program the operation of the router; the configuration
also needs to be implemented within the emulator. An emulator is, as accurately as possible, a representation of reality. Unfortunately, expensive hardware is required in order to set up a router configuration. An emulator is thus a representation of the real router environment, which can interpret both commands and configurations. Routers include a main IOS (Internetwork Operating System) to interpret these commands. The software is intended for people who want to learn or improve their skills in router programming using the produced software.
The report shows how software can be developed which emulates IOS (Internetwork Operating System) commands, and is able to access other routers on the same network and assign their IP addresses and subnet masks. Another aspect of the emulator is that it includes a graphical representation of the commands typed, in order to provide better feedback on the configuration performed and enhance router-programming skills. The developed software also allows router configuration information, and the command sequence used, to be saved and recalled at a later time. To enhance system compatibility, the router emulator uses Java, with Java Swing for the graphical user interface.
Most of the IOS commands can be interpreted and emulated in order to assign IP addresses and subnet masks, set the different passwords inherent to router programming, resolve hostnames, select and implement the routing protocol, and so on. Java has been selected over other object-oriented programming languages as it provides
access to an excellent GUI (Graphical User Interface),
using Java Swing components. Furthermore Java is powerful
and is platform independent, which is a great advantage
for educational purpose as the emulator is able to run over
a wide range of platforms.
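The command-interpretation idea at the heart of the emulator can be sketched as follows. The project implemented this in Java; the sketch below is in Python for brevity, and the two commands handled are illustrative rather than the full IOS set:

```python
class RouterEmulator:
    """Tiny sketch of an IOS-style command interpreter: parse a line,
    update emulator state, and return the next prompt as feedback."""

    def __init__(self):
        self.hostname = "Router"
        self.interfaces = {}

    def execute(self, line):
        """Interpret one configuration command."""
        tokens = line.split()
        if tokens[:1] == ["hostname"] and len(tokens) == 2:
            self.hostname = tokens[1]
            return f"{self.hostname}(config)#"
        if tokens[:2] == ["ip", "address"] and len(tokens) == 4:
            # e.g. "ip address 192.168.1.1 255.255.255.0"
            self.interfaces["current"] = (tokens[2], tokens[3])
            return f"{self.hostname}(config-if)#"
        # Unknown input echoes an IOS-style error
        return f"% Invalid input: {line}"
```

A real implementation would add mode tracking (user, privileged, configuration), per-interface contexts, and the save/open facilities described above, but the parse-dispatch-feedback loop is the same.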
An important element in the development of the software
is the structured design technique used. The report reviews
several of the main types, including waterfall model, evolutionary
development, and Boehm’s spiral model.
The technique used in the development is Boehm’s spiral model, chosen for its good visibility of the process, the possibility of adapting the phases according to the requirements, and the possibility of creating a prototype and cycling through it to refine the requirements, development and testing until the final software is produced.
The report also studies the behaviour of a real router
(based on observation and analysis of the Cisco laboratory
in Napier University using five Cisco Routers) and evaluates some existing emulators. Along with this, user trials have been carried out on the operation of the software, and these are used to appraise the merits of the package as opposed to other similar packages.
Finally, the report concludes with a comparison against different emulators currently on the market and an evaluation by intended users, who agreed that the dynamic graphical representation of the configuration and the extra facilities provided help router-programming skills a great deal. The software reached the objectives stated in the introduction, which were to create an emulator of a router implementing the major IOS commands involved in a real router configuration.
Future work might include a wider evaluation by intended users using HCI tools. A full set of IOS commands could be implemented in order to produce a powerful emulator fully compliant with the latest versions of IOS produced by the different router manufacturers. As networking, and computing more generally, is evolving so fast, the software will need to adapt to these constantly moving technologies through the improvements and changes made in software and hardware architecture.
Mobile agents are often presented as the future
of distributed computing. They introduce a new approach
to the traditional client/server architecture, which hides the network complexity from the end-user and makes data transfers
asynchronous. This is increasingly valuable in a world where the overall network structure is dynamic, due to the mobility of computing components themselves, such as mobile phones, and the fact that servers providing new services appear every day on the Internet while others disappear.
Mobile agents have tremendous potential, and are the subject of a great deal of research at the moment, even if applications of them are not yet widespread. Agent technology has so far been used only by academia and a few industrial players, but agents are expected to become more popular in the years to come. Their applications include user tracking, improved client-server communications, and auditing purposes such as network monitoring.
This report presents an application of mobile agents under
the mobile agent system Tracy. Its main components are databases,
mobile agents, and stationary agents to interface services
with the mobile agent system.
Gathering data over a distributed system, if not organized,
requires a client to connect and query every server on the
distributed system. The application developed here collects
data without having to know which server to query: agent
technology is in charge of the distribution and collection
of data. This data is taken from databases, and extraction
is filtered by SQL queries.
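The collection pattern described above, visiting each host's database in turn and filtering the extraction with SQL, can be sketched as follows. The schema, the query, and the use of in-memory SQLite connections as stand-ins for per-host databases are illustrative assumptions, not details of the Tracy application:

```python
import sqlite3

def collect(hosts, query):
    """Visit each host's database in turn (as a migrating agent would)
    and merge the SQL-filtered rows into one result set."""
    results = []
    for conn in hosts:
        results.extend(conn.execute(query).fetchall())
    return results

# Two in-memory databases standing in for two hosts on the network
hosts = []
for rows in ([("cpu", 90)], [("cpu", 10), ("mem", 95)]):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE metrics (name TEXT, value INTEGER)")
    conn.executemany("INSERT INTO metrics VALUES (?, ?)", rows)
    hosts.append(conn)

# The caller never needs to know which host holds the matching rows
high = collect(hosts, "SELECT name, value FROM metrics WHERE value > 50")
```

This mirrors the key point of the abstract: the client issues one filtered query, and the agent layer takes charge of distribution and collection.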
This document reports the rigorous testing of the Tracy
environment, with details on how the application developed
with it was designed and implemented, what it really does,
and how well it performs.