Research Projects: Cloud Computing (Computer Science)



Ad-Hoc Clouds (Prof. A. Dearle & Dr G. Kirby)
The idea of an ad-hoc cloud is to deploy cloud services over an organization’s existing infrastructure, rather than using dedicated machines within data centres. Key to this approach is the cloud infrastructure’s ability to manage its use of computational and storage resources on individual machines, so that the cloud is sufficiently non-intrusive for individuals to permit its operation on their machines. This project will investigate how this may be achieved.
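To make the non-intrusiveness requirement concrete, here is a minimal sketch of one throttling policy such an infrastructure might apply on a host machine. The run_chunk() function is hypothetical, standing for one small, interruptible unit of cloud work, and the thresholds are illustrative; the sketch uses the third-party psutil library to read local CPU load.

```python
import time

import psutil  # third-party; pip install psutil

CPU_CEILING = 25.0    # run cloud work only while the machine is mostly idle
CHECK_INTERVAL = 5.0  # seconds to back off when the owner is busy

def run_chunk():
    """Placeholder for one small, interruptible unit of cloud work."""
    pass

def harvest_loop():
    # Yield to the machine's owner: perform cloud work only while local
    # CPU utilisation stays below the ceiling, backing off otherwise.
    while True:
        load = psutil.cpu_percent(interval=1)
        if load < CPU_CEILING:
            run_chunk()
        else:
            time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    harvest_loop()
```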

Specifying, Measuring and Understanding High-Level Cloud Properties (Dr G. Kirby & Prof. A. Dearle)
Implementers and users of cloud services may wish to consider various high-level emergent properties of those services. For example, the degree of replication of data items and the physical locations of the replicas both affect data resilience. The degree of replication of service processes and the fail-over mechanisms affect service availability. The management of computations on individual machines affects overall resource utilization and various QoS properties. The aim of this project is to develop and evaluate techniques to allow desired high-level properties to be specified, mapped into appropriate low-level actions, and the results to be measured and reported in terms of the high-level properties.
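As an illustration of the specification side, consider the data-resilience example above. The following sketch is hypothetical throughout: the policy keys (min_replicas, min_distinct_sites), the replica list and the action names are invented, chosen only to show how a high-level property might be mapped into low-level repair actions and then reported back in high-level terms.

```python
# Hypothetical high-level policy for one data item.
policy = {"min_replicas": 3, "min_distinct_sites": 2}

# Measured low-level state: (host, site) pairs where replicas currently live.
replicas = [("host-a", "site-1"), ("host-b", "site-1")]

def plan_actions(policy, replicas):
    """Map the high-level policy onto concrete low-level repair actions."""
    actions = []
    if len(replicas) < policy["min_replicas"]:
        actions.append(("create_replicas", policy["min_replicas"] - len(replicas)))
    sites = {site for _, site in replicas}
    if len(sites) < policy["min_distinct_sites"]:
        actions.append(("spread_to_new_sites", policy["min_distinct_sites"] - len(sites)))
    return actions

def report(policy, replicas):
    """Report measurements back in terms of the high-level properties."""
    return {"replicas": len(replicas),
            "distinct_sites": len({site for _, site in replicas}),
            "policy_satisfied": not plan_actions(policy, replicas)}

print(plan_actions(policy, replicas))  # [('create_replicas', 1), ('spread_to_new_sites', 1)]
print(report(policy, replicas))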

Harvesting Unused Resources (Prof. A. Dearle & Dr G. Kirby)
The aim of this project is to investigate how underused computing resources within an enterprise may be harvested and harnessed to improve return on IT investment. In particular, the project seeks to increase the efficiency of use of general-purpose computers such as office machines and lab computers. As a motivating example, the (small) University of St Andrews operates ten thousand machines. In aggregate, their unused processing and storage resources represent a major untapped computing resource. The project will make harvested resources available in the form of ad-hoc clouds, the composition of which varies dynamically according to supply of resources and demand for cloud services.

An Experimental Laboratory in the Cloud (Prof. I. Gent)
Computer Science is highly suited to experimental science. Unfortunately, many computer scientists are very bad at conducting high-quality experiments. The goal of this project is to make experiments better by using the cloud in a number of ways. The core idea is that experiments are formed as artifacts, for example as a virtual machine that can be put into the cloud. A researcher might, for instance, want to experiment on the speed of different versions of their algorithms. They would make a number of different virtual machines containing each version, which would be sent to the cloud, the experiment run, and the results collected. As well as the results of running the experiment being stored in the cloud, the experiment itself is also there, making the cloud into an experimental repository as well as a laboratory. This enables reproducibility of experiments, a key concept that has too often been ignored. While using the cloud, the project can feed back into research on clouds by investigating how experiments involving the cloud itself can be formulated for use in our new Experimental Laboratory.
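One hedged sketch of the “experiment as artifact” idea: bundle an experiment description with a content hash so it can be archived, cited and re-run. The image name, command and version labels below are invented purely for illustration.

```python
import hashlib
import json

def experiment_artifact(image, command, parameters):
    """Bundle an experiment description so it can be archived and re-run."""
    spec = {"vm_image": image, "command": command, "parameters": parameters}
    # A content hash gives every experiment a stable, citable identity,
    # which is what makes the cloud usable as a repository of experiments.
    digest = hashlib.sha256(json.dumps(spec, sort_keys=True).encode()).hexdigest()
    return {"id": digest[:12], "spec": spec}

# Hypothetical: one artifact per algorithm version under test.
artifacts = [experiment_artifact("ubuntu-solver.img", "./solver --seed 42",
                                 {"version": v}) for v in ("v1", "v2", "v3")]
for a in artifacts:
    print(a["id"], a["spec"]["parameters"])
```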

Computational Group Theory with Map-Reduce (Prof. S. Linton)
The MapReduce skeleton, introduced by Google to provide a uniform framework for their massively parallel computations, is proving remarkably flexible, and is being proposed as a uniform framework for high-performance computing in the cloud. This project would investigate a range of problems in the established area of computational abstract algebra to see whether, and how, they can be effectively parallelised using this framework.
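As a toy illustration of the flavour of problem involved (not a claim about the project’s methods), the conjugacy classes of the symmetric group S4 can be counted by cycle type using explicit map and reduce phases:

```python
from collections import Counter
from itertools import permutations

def cycle_type(perm):
    """Sorted cycle lengths of a permutation given as a tuple of images
    (perm[i] is the image of point i); cycle type determines the
    conjugacy class in a symmetric group."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        length, point = 0, start
        while point not in seen:
            seen.add(point)
            point = perm[point]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

# MAP: each worker turns a group element into a (key, 1) pair.
mapped = ((cycle_type(p), 1) for p in permutations(range(4)))  # elements of S4

# REDUCE: sum the counts per key, giving the conjugacy class sizes of S4.
class_sizes = Counter()
for key, count in mapped:
    class_sizes[key] += count

print(dict(class_sizes))
# {(1, 1, 1, 1): 1, (2, 1, 1): 6, (2, 2): 3, (3, 1): 8, (4,): 6}
```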

Data migration in the cloud (Prof. I. Sommerville)
The cost and time to move data around are currently among the major bottlenecks in the cloud. Users with large volumes of data may therefore wish to specify where that data should be made available, when it may be moved around, and so on. Furthermore, regulations, such as data protection regulations, may place constraints on the movement of data and on the national jurisdictions where it may be maintained. The aim of this project is to investigate the practical issues which affect data migration in the cloud, to propose mechanisms to specify policies on data migration, and to use these as a basis for a data management system.

Socio-technical issues in cloud computing (Prof. I. Sommerville)
The aim of this project is to investigate how a migration of applications to the cloud may result in changes to the way that work is actually done. We know from many years of ethnography that work practice in a setting evolves to reflect the systems and culture of that setting, and that people develop work-arounds to cope with system problems and failures. How might current work-arounds change when the system is in the cloud rather than locally provided? Do the affordances of systems in the cloud differ from those that are locally provided? What ‘cloud-based systems’ (e.g. Twitter) might be used to support new kinds of work-arounds and communication?

This project is suited to a student who has an interest in human, social and organisational issues as well as technical ones. Fieldwork with industrial partners may be involved, and an excellent command of spoken and written English is essential.

Measuring the cloud (Dr T. Henderson)
As soon as electricity began to be purchased as a utility (rather than being generated locally), electricity meters were developed to measure usage. Similarly, if utility computing is to become popular, suppliers will need to measure usage for provisioning, and users will need to measure usage to determine whether they are receiving the service for which they are paying. The aim of this project is to develop new techniques and metrics for measuring cloud computing performance and verifying service-level agreements within clouds, which may be difficult when the cloud itself is designed to be “invisible”.
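A minimal sketch of the verification side, with an entirely hypothetical SLA (95% of requests within 200 ms) and simulated latencies standing in for real measurements probed from outside the cloud:

```python
import random

# Hypothetical SLA: 95% of requests must complete within 200 ms.
SLA_PERCENTILE = 95
SLA_LIMIT_MS = 200.0

def observed_latencies(n=1000):
    """Stand-in for real latency measurements of cloud requests."""
    return [abs(random.gauss(120, 40)) for _ in range(n)]

def sla_met(latencies):
    ordered = sorted(latencies)
    # Latency at the SLA percentile of the sorted sample.
    p = ordered[int(len(ordered) * SLA_PERCENTILE / 100) - 1]
    print(f"p{SLA_PERCENTILE} latency: {p:.1f} ms (limit {SLA_LIMIT_MS} ms)")
    return p <= SLA_LIMIT_MS

print("SLA satisfied:", sla_met(observed_latencies()))
```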

Social cloud computing (Dr T. Henderson)
Many Web 2.0 and cloud computing applications have a social aspect to them, e.g., groupware, e-mail, virtual worlds. The aim of this project is to investigate the use of social network analysis to improve such applications. For instance, is it possible to determine where to cache data depending on which members of a social network are more likely to access particular information?
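One naive placement heuristic along these lines, sketched with an invented social graph: cache a user’s data at their most-connected friends, on the (assumed) grounds that well-connected neighbours are the likeliest readers.

```python
# Hypothetical social graph: user -> set of friends.
graph = {
    "alice": {"bob", "carol", "dave"},
    "bob": {"alice", "carol"},
    "carol": {"alice", "bob", "dave"},
    "dave": {"alice", "carol"},
    "erin": {"bob"},
}

def cache_sites(owner, k=2):
    """Pick the owner's k best-connected friends as cache locations,
    assuming a user's data is mostly read from within their network."""
    friends = graph[owner]
    return sorted(friends, key=lambda u: len(graph[u]), reverse=True)[:k]

print(cache_sites("alice"))  # e.g. ['carol', 'bob']
```

A real study would replace node degree with measured access patterns or a stronger centrality measure; the point is only that graph structure can drive placement.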

Mobile data archiving in the cloud (Dr T. Henderson)
Data archives such as CRAWDAD (http://crawdad.org/) aim to archive terabytes of wireless and mobile network data that are used by thousands of researchers around the world. More recent projects such as Movebank (http://movebank.org/) further this by allowing researchers to collect data in real-time and conduct analysis on the data archive’s servers. The aim of this project is to investigate the use of cloud computing for mobile network data archiving; this will involve a variety of topics in distributed systems including network measurement, privacy, anonymisation/sanitisation, data protection and computation caching.

Abstractions of Cloud Computing (Dr K. Hammond)
The MapReduce programming model has proved to be highly successful for implementing a variety of real-world tasks on cloud computing systems. For example, Google has successfully demonstrated the use of MapReduce to automatically parallelise computations across the large-scale clusters that form the basic components of cloud computing systems [1], and IBM and Google are jointly promoting the Apache Hadoop system for university teaching and research. The key to this success is to develop powerful abstractions, or skeletons, that capture the essence of a pattern of computation and allow software algorithms to be plugged in to perform the required computations. By doing this, we can separate *what* the cloud computing system is required to do from *how* it achieves that effect. This allows tremendous flexibility in placing computations within a cloud, including automatic mapping and reconfiguration to match dynamically changing cloud computing resources.

This PhD project will investigate new, advanced high-level programming models for cloud computing that will be even more powerful than Google’s MapReduce. They will apply to a wider variety of problems and deliver even greater parallel performance. This will be achieved by applying ideas from functional programming to cloud computing, creating very high levels of abstraction, including nested, hierarchical structures that can be mapped to cooperating groups of parallel computing clusters.

[1] Jeffrey Dean and Sanjay Ghemawat. MapReduce: Simplified Data Processing on Large Clusters. Communications of the ACM, 51(1):107–113, January 2008.
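The “separate what from how” point can be illustrated with a small skeleton in Python. The skeleton owns the *how* (partitioning, parallel mapping, grouping, reducing); the plugged-in mapper and reducer supply only the *what*. Word counting is used here purely as a stand-in algorithm.

```python
from collections import defaultdict
from multiprocessing import Pool

def map_reduce(mapper, reducer, inputs, workers=4):
    """The skeleton fixes HOW: parallel mapping, grouping by key,
    reducing. Callers supply only WHAT: the mapper and reducer."""
    with Pool(workers) as pool:
        mapped = pool.map(mapper, inputs)
    groups = defaultdict(list)
    for pairs in mapped:
        for key, value in pairs:
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Plugged-in algorithm: word counting.
def count_words(line):
    return [(word, 1) for word in line.split()]

def sum_counts(word, counts):
    return sum(counts)

if __name__ == "__main__":
    lines = ["the cloud computes", "the cloud scales", "skeletons hide the how"]
    print(map_reduce(count_words, sum_counts, lines))
```

Because the skeleton alone decides where mapping happens, the same user code could run on one machine or across a cluster, which is exactly the reconfiguration flexibility described above.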

Cloud Security (Dr I. Duncan)
A major concern in Cloud adoption is security and the US Government has just announced a Cloud Computing Security Group (Mar 4 2009) in acknowledgement of the expected problems such networking will entail. However, basic network security is flawed at best. Even with modern protocols, hackers and worms can attack a system and create havoc within a few hours. Within a Cloud, the prospects for incursion are many and the rewards are rich. Architectures and applications must be protected and security must be appropriate, emergent and adaptive. Should security be centralized or decentralized? Should one body manage security services? What security is necessary and sufficient? How do we deal with emergent issues?

There are many areas of research within the topic of Cloud Security from formal aspects to empirical research outlining novel techniques. Cloud Privacy and Trust are further related areas of potential research.

Cloud VV&T and Metrics (Dr I. Duncan)
Verification, Validation and Testing are all necessary for basic system evaluation and adoption, but when the system and data sources are distributed, these tasks are invariably done in an ad hoc or random manner. Normal test strategies for testing code, applications or architecture may not be applicable in a cloud; software developed for a non-distributed environment may not work in the same way in a cloud, and multi-threading, network and security protocols may inhibit normal working. The future of testing will be different under new environments; novel system testing strategies may be required to facilitate verification, and new metrics will be required to describe levels of system competence and satisfaction.

There are many areas of research within the topic of Cloud VV&T from formal verification through to empirical research and metric validation of multi part or parallel analysis. Testing can be applied to systems, security, architecture models and other constructs within the Cloud environment. Failure analysis, taxonomies, error handling and recognition are all related areas of potential research.

Constraint-Based Cloud Management (Dr I. Miguel, Prof. A. Dearle & Dr G. Kirby)
A cloud may be viewed as comprising the union of a dynamically changing set of cloudlets, each of which provides some particular functionality. Each cloudlet runs on a potentially dynamically changing set of physical machines, and a given machine may host parts of multiple cloudlets. The mappings between cloudlets and physical resources must be carefully managed in order to yield desirable high-level properties such as performance, dependability and efficient resource usage. To be practical, such management must be automatic, but producing timely, high-quality management decisions for a cloud of significant scale is a difficult task. The aim of this project is to apply constraint programming techniques to solve this problem efficiently.
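A hedged sketch of the underlying placement problem, with invented machine capacities and cloudlet demands. Exhaustive search stands in here for a real constraint solver, which would prune the search space rather than enumerate it.

```python
from itertools import product

# Hypothetical inputs: per-machine capacity and per-cloudlet demand.
machines = {"m1": 4, "m2": 4, "m3": 2}                    # capacity units
cloudlets = {"web": 3, "db": 2, "cache": 2, "batch": 1}   # demand units

def feasible(assignment):
    """An assignment maps each cloudlet to a machine; check capacities."""
    used = dict.fromkeys(machines, 0)
    for cloudlet, machine in assignment.items():
        used[machine] += cloudlets[cloudlet]
    return all(used[m] <= machines[m] for m in machines)

def solve():
    # Brute force over all mappings; a constraint solver would instead
    # propagate the capacity constraints to cut the space down.
    names = list(cloudlets)
    for choice in product(machines, repeat=len(names)):
        assignment = dict(zip(names, choice))
        if feasible(assignment):
            return assignment
    return None

print(solve())  # {'web': 'm1', 'db': 'm2', 'cache': 'm2', 'batch': 'm1'}
```

A realistic model would add the high-level properties named above as further constraints, e.g. replicas of a cloudlet on distinct machines for dependability.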

The Green Cloud (Prof Saleem Bhatti)
Cloud computing requires the management of distributed resources across a heterogeneous computing environment. These resources are typically, from the user viewpoint, “always on”. While techniques exist for distributing the compute resources while presenting the user with an “always on” view, this has the potential to be highly inefficient in terms of energy usage. Over the past few years there has been much activity in building “green” (energy efficient) equipment (computers, switches, storage) and energy efficient data centres. However, there has been little work in trying to model and demonstrate a capability that allows a heterogeneous, distributed compute cloud to use a management policy that also tries to be as energy efficient as possible. This project will explore the use of virtualisation in system and network resources in order to minimise energy usage whilst still meeting the service requirements and operational constraints of a cloud.
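One standard ingredient of such a policy is consolidation: packing virtualised workloads onto as few physical machines as possible so idle machines can be powered down. A minimal sketch with invented load figures, using first-fit-decreasing bin packing:

```python
# Hypothetical load figures: VM demands and a uniform machine capacity.
vm_loads = {"vm1": 0.6, "vm2": 0.5, "vm3": 0.3, "vm4": 0.2, "vm5": 0.2}
MACHINE_CAPACITY = 1.0

def consolidate(loads, capacity):
    """First-fit-decreasing packing: place VMs on as few machines as
    possible so that the remaining machines can be powered down."""
    machines = []  # each entry: [remaining capacity, list of VMs]
    for vm, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        for entry in machines:
            if entry[0] >= load:
                entry[0] -= load
                entry[1].append(vm)
                break
        else:
            machines.append([capacity - load, [vm]])
    return [vms for _, vms in machines]

placement = consolidate(vm_loads, MACHINE_CAPACITY)
print(placement)  # [['vm1', 'vm3'], ['vm2', 'vm4', 'vm5']]
print(len(placement), "machines stay on; the rest can sleep")
```

The research question is harder than the sketch suggests: consolidation must be traded off against the service requirements and operational constraints mentioned above, and migration itself costs energy.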

Cloud Based Virtual Worlds (Colin Allison and Alan Miller)
Although Virtual Worlds such as Second Life are immensely popular (they have over fifteen million registered users), the performance of a region deteriorates quickly with the density and activity of participants. This deterioration manifests itself in significant delays in updating perspectives, rendering objects in the shared environment, movements of avatars and responsiveness to user-initiated actions. Indeed, once an “island” has over fifty avatars on it, it can become unusable. To what extent can Cloud Computing address this problem of dynamic provision of resources? In practice this research would use OpenSim, the open source version of Second Life, as the basis of experiments. One of the experiments may be to compare a federated set of OpenSim grids explicitly hosted on separate installations with a single large grid hosted on the Cloud.

Denial of Service Issues in Cloud Computing (Colin Allison and Alan Miller)
As the Cloud offers dynamically provisioned resource allocation, what happens under denial of service attacks? Does the Cloud simply keep wasting more and more resources? Are there novel forms of DoS that are particularly dangerous for Clouds? Can denial of service protection be built into the Cloud, or must it be dealt with, as at present, at the Internet level?

Can Clouds really meet highly dynamic variations in demand effectively and efficiently? (Colin Allison and Alan Miller)
Technology enhanced learning environments such as Finesse have always suffered from unpredictable and sporadic peak demands that are several orders of magnitude greater than their normal load. Due to their interactive nature it is essential that the extra resources are quickly allocated, and due to potential issues of cost, it is also essential that such jumps in resource allocation are quickly released when peaks subside. We have an analytical model and a set of measurements going back to 1998 on this topic. This project would involve revising the analytical model to accommodate Cloud Computing and carrying out experiments and measurements to compare the responsiveness with earlier work done on Web and Grid computing.
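A toy discrete-time model of the phenomenon, with invented numbers (not drawn from the project’s measurements): a reactive scaling policy suffers unserved demand while new servers are provisioned, then overshoots once they arrive, which is exactly the responsiveness question the project would quantify.

```python
# Toy discrete-time model of a demand spike hitting an elastic service.
PROVISIONING_LAG = 3     # time steps before a requested server comes online
CAPACITY_PER_SERVER = 100

demand = [100] * 5 + [800] * 10 + [100] * 5  # sudden peak, then it subsides

def simulate(demand):
    servers, pending, unserved = 1, [], 0
    for t, load in enumerate(demand):
        # Servers ordered earlier come online after the provisioning lag.
        servers += sum(1 for due in pending if due == t)
        pending = [due for due in pending if due > t]
        capacity = servers * CAPACITY_PER_SERVER
        shortfall = max(0, load - capacity)
        unserved += shortfall
        if shortfall > 0:
            # Naive reactive policy: order enough servers to cover the gap,
            # even though earlier orders may already be in flight.
            needed = -(-shortfall // CAPACITY_PER_SERVER)  # ceiling division
            pending += [t + PROVISIONING_LAG] * needed
        elif capacity - load > CAPACITY_PER_SERVER and servers > 1:
            # Release servers promptly when the peak subsides.
            servers -= 1
    return unserved

print("requests unserved during the spike:", simulate(demand))
```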
