I have collected a list of relevant thesis (and possibly doctoral dissertation) topics and titles for Master’s students in the ICT field of study. These thesis topics come from a variety of sources, including my own experience preparing my thesis and special study. Feel free to add or comment with your thesis or dissertation topics in the ICT (Information and Communication Technologies) field of study. This article will be updated with newer thesis topics for Master’s and doctoral degrees in ICT as soon as I come across your suggestions and new developments in the field.
- Cognitive Radios & Cooperative Relay Networks – performance evaluation
- Disaster Warning & Post-disaster Communications (this is really a vast topic)
- Applications of ICT in Sustainable Development (find any specific topic to work on)
- Ad-hoc-network-based Social Networks and iMANET
- P2P Systems & Social Networks, Internet of Things (Social Networks on contribution after thesis)
- Wireless Mesh/Sensor Network – energy harvesting, security, mobility, self-configuration (be precise on any specific topic under WSN)
- 3D Internet and Virtual Worlds (Green ITS) – OpenEnergySim: an open-source platform for exploring “Green ITS” (Intelligent Transport System) and eco-driving in the Unity3D virtual world (immersive driving with game wheel, traffic simulation, CO2 emission simulation, collaborative evaluation framework of ITS measures, modeling driver behavior, etc.) http://www.prendingerlab.net/globallab
- Protocol design for resource management and QoS control in wireless networks
- Networks of the Future and Cloud. Advisor: Prof. Djamal Zeghlache.
- Web 2.0 as a leverage force to automate enterprise tacit interactions. Advisors: Prof. Walid Gaaloul/Bruno Defude.
SOA and Web 2.0 have coexisted as integration paradigms on the Web for some years now. SOA is typically applied in business contexts, while Web 2.0 is an informal approach for ad hoc integration. The two approaches reflect the IT/people duality: while people require well presented knowledge and implicitly coordinate their cooperation through data/knowledge communication, machines deal rather with structured data and explicitly coordinate their interactions.
This PhD proposal tries to reconcile these conflicting requirements by combining the complementary strengths of SOA (processing automation) and Web 2.0 (user-centric practicality) in order to enable end users to cooperate and manage domain-specific processes on the Web. Taking a process-driven approach in an enterprise-centric context, we aim to develop methods and tools that support the achievement of organizational tasks.
- Semantic Workflow Mining. Advisors: Prof. Walid Gaaloul/Bruno Defude.
The term mining means different things to different people. In the IT field, databases were among the first disciplines to embrace this term, through data mining. After a while, businesses felt that mining data was not enough for discovering the good and bad practices in how they do things in terms of executed processes, raised exceptions, and deployed solutions. As a result, the process-mining discipline came into play with new hopes for businesses. Nowadays, different approaches and techniques promote the use of process mining so that decision makers are in a better position to know who did what, when it was done, how it ended up, and where it happened. Logs that keep track of all executed activities are the primary source for process mining. These activities are usually part of workflows upon which different applications, such as Customer Relationship Management (CRM) and Enterprise Resource Planning (ERP), are built.
A workflow can be abstracted through three complementary and interacting models: organizational, informational, and process. The first model identifies the actors of a workflow and the activities these actors can carry out. The second model structures the documents and data that a workflow’s processes manage. Finally, the process model defines and coordinates the activities that form processes. Despite the value logs add to workflow mining with a focus on process discovery, several studies have demonstrated the limitations of these logs when it comes to identifying a workflow’s informational and organizational models. Both models are critical, as they make it possible to detect what is going on in a business in terms of organizational structure and information flow.
To address the logs’ limitations, one step consists of enriching logs with semantic details drawn from the interactions that take place during workflow execution. The semantic enrichment of logs should help identify the organizational and informational models of a workflow. On the one hand, the organizational model would show the types of relationships (e.g. delegation, cooperation, conflict) that govern the interactions between actors within an organization or between actors from different organizations (such as hierarchy, coalition, market, sub-contracting, etc.). These relations could be static, dynamic, temporary, spontaneous, etc. On the other hand, the informational model would show the different data flows that interactions carry, whether these flows are official or unofficial. Using agent technology, we aim to provide a new view on workflow mining. This technology handles and provides representations for organizational concepts such as groups, roles, and commitments, as well as organizational structures (such as federation, hierarchy, or market) inherited from Organizational Theory. All these concepts are useful to structure, govern, or at least understand at a macro level the coordination patterns of the different partners involved in a workflow or an Inter-Organizational Workflow (IOW).
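To make the idea of mining an organizational model from an event log concrete, here is a minimal sketch. The log format, case identifiers, activity names, and actor names are all invented for illustration; it derives a simple “handover of work” relation (who passes work to whom), one of the organizational relationships mentioned above.

```python
from collections import defaultdict

# Hypothetical event log: (case_id, activity, actor), ordered by timestamp.
log = [
    ("c1", "submit_claim", "alice"),
    ("c1", "review_claim", "bob"),
    ("c1", "approve_claim", "carol"),
    ("c2", "submit_claim", "alice"),
    ("c2", "review_claim", "bob"),
]

def handover_matrix(events):
    """Count how often work passes from one actor to the next within a case."""
    by_case = defaultdict(list)
    for case, _activity, actor in events:
        by_case[case].append(actor)
    handovers = defaultdict(int)
    for actors in by_case.values():
        for src, dst in zip(actors, actors[1:]):
            if src != dst:
                handovers[(src, dst)] += 1
    return dict(handovers)

print(handover_matrix(log))
# → {('alice', 'bob'): 2, ('bob', 'carol'): 1}
```

Real workflow mining works on far richer logs (timestamps, lifecycle states, data attributes), but even this toy relation hints at an organizational structure that the process model alone would not reveal.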
PhD Research Topics – CSIRO ICT Centre, Canberra:
1. Real-Time Stream Processing using GPUs
Over the last decade, data analysis has become central to important scientific discoveries. In almost every field of science, ranging from astronomy to marine biology, colossal amounts of data are being generated in very short periods of time. Modern computing resources are capable of efficiently collecting and storing vast data streams. However, the scalable and distributed analysis of such large data sets in real time poses compelling research challenges. In addition, application service providers performing real-time analysis of streaming data aim to eliminate erroneous data that causes uncertainty and faulty judgments. Specifically, there is a distinct requirement for intermediaries that easily map exhaustive computations on large data sets to heterogeneous and distributed computing resources while ensuring data quality and reducing errors to deliver better quality of service. As a consequence, there is an increasing need for innovative software solutions, new hardware architectures, and methodologies that harvest emerging distributed computing resources in an efficient manner. This PhD project will develop a novel set of architectures, algorithms, and tools to perform real-time data stream processing, using a variant of the MapReduce programming model running on GPUs.
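As a rough illustration of the MapReduce-over-streams idea, the sketch below applies a map and reduce pass to each fixed-size window of an unbounded stream. The sensor stream, window size, and averaging reducer are invented for illustration; a GPU variant would execute the map and reduce phases as data-parallel kernels instead of Python loops.

```python
from collections import defaultdict
from itertools import islice

# Hypothetical sensor stream: (sensor_id, reading) pairs.
def sensor_stream():
    yield from [("s1", 2.0), ("s2", 4.0), ("s1", 6.0), ("s2", 8.0)]

def map_reduce_window(stream, window_size, mapper, reducer):
    """Apply one MapReduce pass to each fixed-size window of the stream."""
    while True:
        window = list(islice(stream, window_size))
        if not window:
            break
        # Map phase: each record may emit several (key, value) pairs.
        groups = defaultdict(list)
        for record in window:
            for key, value in mapper(record):
                groups[key].append(value)
        # Reduce phase: fold each key's values into one result.
        yield {k: reducer(vs) for k, vs in groups.items()}

# Mapper keys readings by sensor; reducer averages them per window.
results = list(map_reduce_window(
    sensor_stream(), window_size=4,
    mapper=lambda rec: [(rec[0], rec[1])],
    reducer=lambda vs: sum(vs) / len(vs),
))
print(results)  # → [{'s1': 4.0, 's2': 6.0}]
```

Windowing is what turns batch-oriented MapReduce into a streaming model: each window is a small, independent batch, which is also what makes the phases easy to hand off to a GPU.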
2. High-Throughput/High-Content Analysis in Bio-Science Applications
The Transformational Biology Platform within the Commonwealth Scientific and Industrial Research Organization (CSIRO), Australia is supporting activities that aim to accelerate the analysis of biological specimens. One application in view is to automatically identify protein crystals from a time series of 2D images of slowly evaporating solutions. According to the current operational flow, once images are acquired from a physical specimen, downstream processing is performed for feature extraction, followed by further analysis such as classification, clustering, and ranking. While the processing phase in the operational pipeline is readily parallelizable, care and planning are needed to realize the potential for computational speedup. This PhD project will create approaches for efficient image data management and parallelized execution (in the form of workflows) that can be used in the development of bioscience image analysis pipelines. Two potential computational biology applications for this project are: i) high-throughput assessment of protein crystallization experiments and ii) pollen image analysis.
3. Integration of Body Sensor Networks and Cloud Computing
A specific class of wireless sensor networks is Body Sensor Networks (BSNs), which represent an emerging platform for many human-centered applications, spanning from medical to gaming and social networking. This area attracts particular interest because foreseen real-world applications of BSNs aim to improve quality of life by enabling continuous, real-time, non-invasive medical assistance at low cost. In a common health care scenario, assisted-living patients are monitored by BSNs able to gather data streams that must be not only processed in real time but also stored in remote medical data repositories. This implies a huge amount of data to be transmitted, stored, and analyzed. In such contexts, the management of a large number of isolated BSNs, as well as of cooperative BSNs supporting applications for e-Health, e-Society, etc., will be an important issue to deal with. Moreover, both on-line and off-line analysis of data coming from such networks should be supported by efficient infrastructures for processing and storing data, such as Clouds. This PhD project will conduct the design, architecture development, and analysis of applications for the management of processes and data in large-scale systems based on wearable sensors, the optimization of sensor data streaming, and the use of innovative Cloud-oriented paradigms. This work will be conducted in collaboration with Prof. Giancarlo Fortino from the University of Calabria, Italy.
4. Dynamic and Scalable Replication for Content Delivery Networks
Today’s Internet content providers primarily use Content Delivery Networks (CDNs) to deliver content to end users with the aim of enhancing their Web access experience. CDNs typically host third-party content, including static content, streaming media, user-generated videos, and varying content services. With the rapid growth of Peer-to-Peer (P2P) techniques and the improvement of digital content production and retrieval tools, the CDN industry, i.e. content delivery, consumption, and monetization, has been undergoing rapid changes. The multi-dimensional surge in content delivery from end users has led to an explosion of dynamic (and uncachable) content, such as user-generated clips, VoD libraries, and IPTV services; new content formats; and an exponential increase in the size and complexity of the digital content supply chain. The effectiveness of existing caching mechanisms is reduced by this dynamic content, the high frequency of content refresh, and the presence of personal user information. It is further complicated by the convergence of Web 2.0 with user mobility, which disrupts access locality due to user migration among edge nodes. Hence, innovative solutions integrating replication and caching are required for the management and delivery of dynamic and personalized content. This PhD project will develop a novel suite of architectures, algorithms, and practical tools to enable dynamic, scalable, and efficient content replication and delivery mechanisms in CDNs.
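One classic building block for scalable replica placement of the kind this topic calls for is consistent hashing: content keys and edge nodes are placed on the same hash ring, so adding or removing a node only remaps a small fraction of keys. The sketch below is illustrative only; the node names, virtual-node count, and replica count are invented, and it is not any specific CDN's algorithm.

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Map content keys to edge nodes via a hash ring with virtual nodes."""

    def __init__(self, nodes, vnodes=100):
        # Each physical node gets `vnodes` positions for better balance.
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes for i in range(vnodes)
        )

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def nodes_for(self, key, replicas=2):
        """Return the first `replicas` distinct nodes clockwise from the key."""
        idx = bisect_right(self.ring, (self._hash(key), chr(0x10FFFF)))
        chosen = []
        for offset in range(len(self.ring)):
            _, node = self.ring[(idx + offset) % len(self.ring)]
            if node not in chosen:
                chosen.append(node)
            if len(chosen) == replicas:
                break
        return chosen

ring = ConsistentHashRing(["edge-us", "edge-eu", "edge-ap"])
print(ring.nodes_for("/videos/clip42.mp4"))
```

For the dynamic, personalized content described above, a research contribution would go beyond this static placement, e.g. by weighting ring positions with observed demand or user locality.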
5. Manipulating the Linked Open Data cloud
Linked Data technologies have rapidly gained momentum in recent years as the primary means for publishing data on the Web. The uptake of Linked Data technologies has led to the extension of the Web with a public Linked Data space containing more than 6.7 billion pieces of information, making it by far the largest source of multi-domain, real-world data currently in existence. This data space is increasingly being used for both research efforts and real-world applications. As a result of its explosive growth, several problems have emerged within the Linked Data space, including the lack of well-documented tools for publishing, editing, and consuming the data. Currently, when publishing Linked Data, one needs to first define a webpage in HTML and then use RDF to annotate the data with semantic concepts. If RDF is used to present data, one needs to find a way to bind the data to the presentation. This PhD project will extend an RDF-based user interface language that, in a similar fashion to XForms, separates the purpose (data publishing) of a Web application from its presentation. The student will investigate how this language can be used in combination with business process modelling languages to develop user interfaces that can manipulate data in the Linked Open Data cloud. The end users of these interfaces should be unaware of the Semantic Web language underlying this approach.
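To ground the data-versus-presentation separation the topic describes, here is a minimal sketch that emits plain RDF in N-Triples syntax, the data layer that a user-interface language would then bind to a presentation. The `example.org` URIs and the person data are illustrative only; the FOAF vocabulary is real.

```python
def ntriple(subject, predicate, obj, literal=False):
    """Serialize one RDF statement as an N-Triples line."""
    o = f'"{obj}"' if literal else f"<{obj}>"
    return f"<{subject}> <{predicate}> {o} ."

FOAF = "http://xmlns.com/foaf/0.1/"
person = "http://example.org/people/alice"  # hypothetical resource URI

triples = [
    ntriple(person, FOAF + "name", "Alice", literal=True),
    ntriple(person, FOAF + "knows", "http://example.org/people/bob"),
]
print("\n".join(triples))
```

In practice a library such as rdflib would handle serialization, datatypes, and escaping; the point here is only that the published data carries no presentation markup at all, which is exactly the gap a UI language in the spirit of XForms would fill.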
6. Distributed BPEL processes in the cloud
The Business Process Execution Language, WS-BPEL (originally BPEL4WS), is the standard process model for process execution in the business process modelling space. As more and more businesses deploy parts of their business processes in the cloud, traditional execution models of BPEL processes need to change. Instances of the same process might run on different machines in the cloud, and even the location where the data resides for a given process instance might be unknown to the execution engine at run time. In this PhD project the student will investigate which parts of the BPEL model and its execution are affected when a process is deployed on multiple machines in the cloud, and how to mitigate the issues by introducing language constructs and an execution model suited for distributed Cloud processes.
7. Social Network for services
Human-powered (aka crowd-sourcing) systems have produced promising solutions to problems that went unsolved for years. Crowd-sourcing is used in social networks that contain large datasets about people’s skills and work experience to match them with companies who are looking for a specific set of skills and expertise. Employers can directly address prospective candidates via the community (a “crowd”) through an open call. Crowd-sourcing leverages mass collaboration enabled by Web 2.0 technologies. An example of such a social network for experts is LinkedIn, which contains an immense knowledge base created by its users. However, for job providers it is not trivial to find the one expert who has the skills and expertise required and is also available. The goal of this PhD project is to radically leverage and improve the utility of future Internet services by merging context-aware technologies, semantic analysis, and advanced data modeling for the needs of workers, professionals, and businesses. In contrast to LinkedIn, the project aims at pursuing a federated and decentralized social networking ecosystem that can be shared by professionals and companies. The project will push the state of the art in information and expertise management in social networks by introducing a new social network paradigm that builds a Semantic Web-enabled, “expertise-enabled” social networking infrastructure on top of a decentralized and federated Social Web architecture. That way, professional capabilities and profile information will be modeled using Semantic Web technologies and Linked Data principles, enabling interoperability with other professional applications inside and outside the enterprise. Instead of the existing “friend” and “acquaintance” model, the project aims to provide “expertise-centered” social networks.
In particular, the proposed social network layer will be suitable for federated deployment across existing silos, and for effective social information processing and discovery by both professionals and end users.
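A toy version of the “expertise-centered” matching at the core of this topic can be sketched as a skill-overlap ranking. The expert profiles, skill tags, and the choice of Jaccard similarity are all illustrative assumptions, not the project's actual method.

```python
# Hypothetical expert profiles: name -> set of skill tags.
profiles = {
    "alice": {"java", "bpel", "soa"},
    "bob": {"python", "rdf", "sparql"},
    "carol": {"rdf", "soa", "linked-data"},
}

def rank_experts(required, profiles):
    """Rank experts by Jaccard similarity between required and offered skills."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    scored = [(name, jaccard(required, skills)) for name, skills in profiles.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

print(rank_experts({"rdf", "sparql"}, profiles))
# bob ranks first (both required skills), then carol, then alice
```

Modeling the profiles as Linked Data rather than flat tag sets is what would let such a ranking work across federated deployments, since every network could expose skills in the same machine-readable vocabulary.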
ICT 2014 Research Topics
Some of these activities and research programs are identified in the ICT-LEIT part of the work programme of Horizon 2020: The EU Framework Programme for Research and Innovation. Detailed information on ICT Master’s, doctoral, and independent research is available at http://ec.europa.eu/research/participants/data/ref/h2020/wp/2014_2015/main/h2020-wp1415-leit-ict_en.pdf
- A new generation of components and systems
- Advanced Computing
- Future Internet
- Content technologies and information management
- Micro- and nano-electronic technologies, Photonics
ICT 1 – 2014: Smart Cyber-Physical Systems
Specific Challenge: Cyber-Physical Systems (CPS) refer to next-generation embedded ICT systems that are interconnected and collaborating, including through the Internet of Things, and that provide citizens and businesses with a wide range of innovative applications and services. These are the ICT systems increasingly embedded in all types of artefacts, making our transport systems, cars, factories, hospitals, offices, homes, cities, and personal devices smarter, more intelligent, more energy-efficient, and more comfortable. The focus is on both reinforcing European industrial strengths and exploring new markets.
Often endowed with control, monitoring and data gathering functions, CPS need to comply with essential requirements like safety, privacy, security and near-zero power consumption as well as size, usability and adaptability constraints. To maximise impact and return on investment in this field, the following challenges must be addressed:
- De-verticalising technology solutions with CPS platforms that cut across the barriers between application sectors including mass consumer markets.
- Bringing together actors along the value chain from suppliers of components and customised computing systems to system integrators and end users.
- Creating new ICT platforms for both vertical and core markets, from automotive, health, smart buildings, and energy to wireless communications and digital consumer products and services.
ICT 2 – 2014: Smart System Integration
Specific Challenge: The aims are to develop the next generations of smart systems technologies and solutions, based on systemic miniaturisation and integration of heterogeneous technologies, functions, and materials, and to establish competitive European ecosystems for the design, R&D, prototyping and testing, manufacturing, and industrialisation of smaller, smarter (predictive, reactive, and cognitive), energy-autonomous Smart Systems. These ecosystems will provide services for cost-efficient access to European manufacturing capabilities and expertise, including training, design, and pilot-line production and testing, in particular for new users of Smart Systems.
This specific challenge contributes to the strategy of the micro- and nano-electronics KET in the area of More than Moore, and complements the activities of topic ICT 25.
Scope: The focus is on:
a. Research & Innovation Actions for one or both of the following:
- To advance the state of the art in the heterogeneous integration of micro- and nanotechnologies (nanoelectronics; micro-electro-mechanical, magnetic, photonic, micro-fluidic, electrochemical, acoustic, and bio/chemical principles; and microwave technologies) into smart systems.
Work will be driven by industrial requirements and specifically target multi- disciplinary R&D in the following areas:
– Miniaturised systems based on high-density 3-dimensional heterogeneous integration.
– Autonomous deployable smart systems that include efficient energy management (Zero Power technologies) and energy harvesting from their operating environment.
– Advanced smart systems with multi-functional properties, including sensing, storing, processing, actuation, and ultra-wideband communication.
Actions may address performance, design and testing, but the focus will be on the integration into systems, including manufacturability and packaging.
- Predictive modeling by multilevel networks of wearable sensors and cell phone data
- Data distillation from Social Big Data stream
- Advanced SMT Techniques for Word-level Formal Verification – (WOLF)
- Remote sensing techniques for digital Earth applications
You might be interested in ICT Doctoral (PhD) Research Topics from UniMore at http://www.ict.unimore.it/index.php/for-students/research-topics
Finding (useful) research on ICT use in education in developing countries ~ a World Bank blog on ICT use in education: http://blogs.worldbank.org/edutech/finding-useful-research-on-ict-use-education-in-developing-countries