This project explores how Artificial Intelligence (AI) can enhance game mechanics, player engagement, and social interaction in BitPet, a location-based Augmented Reality (AR) game designed to promote physical activity. The game, inspired by Tamagotchi, Pokémon GO, Animal Crossing, and Pokémon Snap, has been in development since 2020 and is set for a soft launch in Summer 2025.
This project offers an opportunity to work with cutting-edge AI technology while gaining hands-on experience in game development. Developers from BitPet will provide technical support throughout the process.
Project Scope:
The front end will be developed in Unity. This project is designed for two students, and experience with Unity development is beneficial.
This project aims to investigate the physical and social health effects of playing BitPet, a location-based AR game designed to promote fun, physical activity, and social interaction. Inspired by Tamagotchi, Pokémon GO, Animal Crossing, and Pokémon Snap, BitPet has been in development since 2020 and is scheduled for a soft launch in Summer 2025.
Project Phases:
Literature Review – Study existing theories, games, and empirical research on the health effects of similar games.
Empirical Study Design – Develop a study to evaluate:
Data Collection & Analysis – Conduct questionnaires, interviews, observations, and analyze in-game data tracking.
Findings & Recommendations – Provide:
The project will be co-supervised by faculty from NTNU’s Faculty of Medicine and Health Sciences and is suited for one or two students.
This project aims to develop game mechanics that will motivate users to socialize and be physically active using Augmented Reality. It is part of the BitPet project, which aims for commercialization. Developers in BitPet will provide technical support.
The project will involve a study of existing theory, game concepts, and technology, the design and development of a game concept (both front-end and back-end), and an evaluation of the concept involving real users.
The front end will be developed in Unity.
This project requires two students.
This project aims to design and develop innovative game concepts that integrate an exercise bike as a game controller, complementing traditional button inputs. In addition to button controls, players should use pedaling as a core mechanic to interact with the game. The goal is to create an engaging experience that remains enjoyable over time while promoting physical activity.
The game will be developed in Unity.
This project is designed for a team of two students, requiring prior experience with Unity.
The goal of this project is to design and develop new game concepts for a game where an exercise bike is used as a game controller in addition to traditional game input through multiple buttons. Besides button input, the player should control the game by using their feet to move the pedals. The goal of the game is to provide fun that lasts over time while giving physical exercise. The game should be implemented in Unity using an API provided for the exercise bike controller.
The goals of this project are:
Research existing exergames and games that could fit this purpose
Design and implement a prototype game
Provide input on the API used for the exergame framework
Evaluate the game through user experiments
This project is for a group of two students, and experience with Unity is required.
In this project, the goal is to develop new game concepts and technologies for exergames - games where the player performs physical exercise. There are several approaches to exergames, and the challenge is to find the balance between something that is fun to play and getting real physical exercise from playing the game.
The first phase of the project will consist of a theoretical study of exergames and mechanisms for using games as motivators. The second phase focuses on implementing a prototype using various technologies. In the third and final phase, the prototype will be evaluated and tested.
This project requires a group of two students.
This project aims to develop innovative game concepts and technologies for exergames—games that incorporate physical exercise as a core mechanic. A key challenge is balancing engaging gameplay with meaningful physical activity to create a fun and effective experience.
The project will be carried out in three phases:
This project requires a team of two students.
This project aims to design, implement, and evaluate a multi-player learning game where students work together or compete to complete challenges while simultaneously acquiring knowledge. The game must strike a balance between engagement and education, ensuring it remains both enjoyable and effective as a learning tool. The students are encouraged to use AI as a key component to improve the gameplay and the experience.
The project will involve reviewing research on game-based learning, games, and use of AI in games, developing and implementing a game concept, and conducting user evaluations to assess its effectiveness.
This project is designed for a team of two students.
Background
This thesis focuses on classifying an object through minimal tactile exploration, that is, inferring the shape or class of an object from a sequence of touches by a tactile sensor. The goal is to develop a next best touch (NBT) strategy that guides the exploration process by selecting the most informative touch points to maximize shape inference accuracy. Instead of exploring at random, the system will leverage learned priors to make data-driven decisions about the optimal place to touch next.
The classification task involves distinguishing between a fixed set of objects, where the system must infer which object is being touched/grasped after a fixed number of touches (e.g., five). A cost-function-based approach will be implemented to optimize the NBT strategy, with the potential integration of learned heuristics (pure learning-based approaches are also acceptable) to refine the decision-making process. The final model should be capable of efficiently reconstructing object geometry and making accurate classifications with minimal interaction.
This research has applications in robotic tactile perception, particularly in scenarios where visual sensing is limited or unreliable, such as identifying objects in cluttered or occluded environments. The MSc thesis entails signing an agreement with SINTEF Ocean.
Proposed approach
The proposed approach involves collecting tactile data using a GelSight sensor mounted on a robotic arm while interacting with various objects. This dataset will be used to train a classifier model capable of inferring object identity based on a sequence of sensor readings. The system will then learn to identify the most informative next touch (NBT) that maximizes classification accuracy.
For validation, the trained model will be tested on a fixed set of relevant objects with different shapes, where the system must correctly identify which object is being grasped using a limited number of tactile interactions. The classifier’s performance will be evaluated based on its efficiency in making accurate inferences with minimal touches.
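The cost-function idea above can be sketched as greedy expected-entropy minimization over a belief about the object class. The sketch below is a toy illustration, not the thesis method: the `p_feature` sensor model, object names, and three touch locations are invented, whereas a real system would learn such priors from GelSight training data.

```python
import math

# Toy sensor model (invented for illustration): p_feature[obj][loc] is the
# probability that touching candidate location `loc` yields a "contact"
# reading when the grasped object is `obj`.
p_feature = {
    "cube":     [0.9, 0.1, 0.8],
    "sphere":   [0.5, 0.5, 0.5],
    "cylinder": [0.9, 0.9, 0.1],
}
objects = list(p_feature)
num_locs = 3

def entropy(belief):
    # Shannon entropy (bits) of the class belief.
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def posterior(belief, loc, reading):
    # Bayes update of the class belief after observing `reading` at `loc`.
    post = {}
    for obj, prior in belief.items():
        likelihood = p_feature[obj][loc] if reading else 1 - p_feature[obj][loc]
        post[obj] = prior * likelihood
    z = sum(post.values())
    return {o: p / z for o, p in post.items()} if z > 0 else belief

def next_best_touch(belief):
    # Pick the location whose (yet unknown) reading minimizes the
    # expected posterior entropy, i.e. is most informative.
    def expected_entropy(loc):
        p_contact = sum(belief[o] * p_feature[o][loc] for o in objects)
        return (p_contact * entropy(posterior(belief, loc, True))
                + (1 - p_contact) * entropy(posterior(belief, loc, False)))
    return min(range(num_locs), key=expected_entropy)

belief = {o: 1 / len(objects) for o in objects}
loc = next_best_touch(belief)          # most informative first touch
belief = posterior(belief, loc, True)  # update after a simulated "contact"
```

Each touch triggers a Bayes update of the belief, and exploration stops once the belief is confident enough or the touch budget (e.g., five touches) is spent.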
Requirements
Supervisors: Theoharis Theoharis (NTNU), Ekrem Misimi, Sverre Herland (SINTEF Ocean)
Reference
https://ieeexplore.ieee.org/document/10801324
More and more healthcare applications use AI, and many of them are categorized as high-risk. It is therefore essential to help stakeholders in the healthcare domain understand the EU AI Act. However, the EU AI Act is a complex regulation that is hard to follow. This project aims to study how to design and develop an educational game that teaches the EU AI Act. The project will apply the design science research method and invite healthcare stakeholders to pilot and evaluate the game. The expected results are a prototype of the game and a methodology for designing such a game.
The project will be co-supervised by Prof. Øystein Nytrø and Prof. Alf Inge Wang.
The rise of video game streaming platforms like YouTube and Twitch has led to an explosion of gaming-related video content. However, categorising and analysing this vast content manually is impractical. This project proposes the development of an automated system that uses computer vision and NLP techniques to identify, classify, and categorise video game content in streaming videos.
This project focuses on developing an ML-based system capable of segmenting and classifying gaming videos into different emotional and thematic categories. The goal is to automatically assign content/narrative percentages to the content types present in a live-streamed gaming video: for instance, identifying that a stream consists of 10% exploration, 30% combat, 20% high-tension moments, and 40% calm narrative or idle periods. This goes beyond traditional object or HUD detection, aiming instead to capture the mood, pacing, and thematic shifts within gaming content.
Objectives:
Data collection: gather a large dataset of gaming video clips from YouTube and Twitch across multiple popular video games and genres.
Feature extraction: develop a deep learning model to identify sequential content through feature extraction (check the following references).
Employ audio/NLP analysis to gauge the emotional tone of each segment.
Multimodal model: integrate audio-visual fusion models to combine insights from both modalities for richer understanding.
Temporal analysis: implement sequence models (e.g., LSTMs or Transformers) to ensure smooth and coherent classification across video timelines.
Content summarisation: aggregate segment classifications to generate an overall content breakdown (e.g. 10% exploration, 30% combat, 20% high-tension moments, and 40% calm narrative or idle periods.)
Evaluation: evaluate model performance using standard metrics like accuracy, precision, recall, and F1-score, and test generalisation on unseen games or streams.
Conduct qualitative evaluations comparing automated summaries to human annotations.
Technical Considerations:
Use video processing libraries to extract frames at regular intervals.
Develop semi-automated tools to assist in the manual annotation process.
Fine-tune pre-trained models, apply multi-label classification architectures, and use temporal models for sequence consistency.
Incorporate sequential models to capture temporal patterns across video sequences.
Use transfer learning to handle limited labelled data for niche games.
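The summarisation step listed above reduces to counting per-segment classifier outputs. A minimal sketch follows; the segment labels are hard-coded for illustration, whereas in the real pipeline they would come from the trained multimodal model, one label per fixed-length window.

```python
from collections import Counter

# Hypothetical per-segment labels, e.g. one per 10-second window of a stream.
segment_labels = [
    "exploration",
    "combat", "combat", "combat",
    "high_tension", "high_tension",
    "calm", "calm", "calm", "calm",
]

def content_breakdown(labels):
    """Aggregate per-segment labels into an overall percentage breakdown."""
    counts = Counter(labels)
    total = len(labels)
    return {category: round(100 * n / total, 1) for category, n in counts.items()}

print(content_breakdown(segment_labels))
# {'exploration': 10.0, 'combat': 30.0, 'high_tension': 20.0, 'calm': 40.0}
```

A temporal model (LSTM or Transformer) upstream would smooth the label sequence before this aggregation, so isolated misclassified windows do not distort the breakdown.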
Expected Outcomes:
A recommender system based on identified video game narratives
Desired Candidate Skills:
Proficiency in Python, deep learning frameworks (TensorFlow/PyTorch).
Experience in computer vision and NLP/audio processing.
Interest in gaming and understanding of different gaming genres.
References:
Yeung, S., Russakovsky, O., Jin, N., Andriluka, M., Mori, G., & Fei-Fei, L. (2018). Every moment counts: Dense detailed labeling of actions in complex videos. International Journal of Computer Vision, 126, 375-389.
Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., & Zhang, L. (2018). Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6077-6086).
Yu, H., Wang, J., Huang, Z., Yang, Y., & Xu, W. (2016). Video paragraph captioning using hierarchical recurrent neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4584-4593).
Schwenzow, J., Hartmann, J., Schikowsky, A., & Heitmann, M. (2021). Understanding videos at scale: How to extract insights for business research. Journal of Business Research, 123, 367-379.
Haroon, M., Wojcieszak, M., Chhabra, A., Liu, X., Mohapatra, P., & Shafiq, Z. (2023). Auditing YouTube’s recommendation system for ideologically congenial, extreme, and problematic recommendations. Proceedings of the national academy of sciences, 120(50), e2213020120.
Yakaew, A., Dailey, M. N., & Racharak, T. (2021, February). Multimodal Sentiment Analysis on Video Streams using Lightweight Deep Neural Networks. In ICPRAM (pp. 442-451).
Karjee, J., Kakwani, K. R., Anand, K., & Naik, P. (2024, January). Lightweight Multimodal Fusion Computing Model for Emotional Streaming in Edge Platform. In 2024 IEEE 21st Consumer Communications & Networking Conference (CCNC) (pp. 419-424). IEEE.
Lightweight Models for Emotional Analysis in Video. https://arxiv.org/abs/2503.10530
Introduction
With an aging population and surging demand for homecare services (hjemmetjenester), there is an urgent need for innovation and optimization in service delivery. Decentralized technology presents distinct advantages, such as user-centric data ownership, accessibility, heightened privacy, security, and transparency. These attributes have the potential to significantly enhance the efficiency and efficacy of municipal homecare services. This proposal aims to explore how decentralized technology could transform and enhance the homecare services administered by Trondheim municipality.
Central to the project's focus is the introduction of a decentralized, user-owned health wallet platform to tackle the challenges facing contemporary homecare services. This platform empowers individuals to take control of their health data, enabling secure storage, management, and sharing of medical information according to their preferences. Against the backdrop of mounting concerns about privacy breaches and data mishandling, the initiative offers a compelling alternative by returning ownership and oversight of sensitive health data to patients, thereby strengthening security and privacy. It also promises to foster transparency and bolster patient autonomy, encouraging active participation in their healthcare journey. Note that this is not just about data; it is also about patient empowerment.
Furthermore, the envisioned platform holds the potential to revolutionize not only homecare services but also broader healthcare provision by facilitating seamless data exchange among patients, healthcare providers, and other stakeholders. Such enhanced connectivity ultimately promises improved outcomes and a more patient-centric healthcare ethos.
Tentative tasks
In an era marked by increasing digital transactions and online interactions, ensuring the security and integrity of personal identities has become paramount. Traditional methods of identity verification, such as passwords and biometrics, are often susceptible to fraud and exploitation. However, the integration of Artificial Intelligence (AI) offers a promising avenue for strengthening identity security measures. By harnessing AI algorithms for identity verification, organizations can enhance accuracy, efficiency, and resilience against fraudulent activities. This proposal seeks to explore the implementation of AI-driven identity security systems to fortify the protection of individuals' personal information and prevent identity theft.
Tentative Tasks:
Along with the development of Large Language Models (LLMs), many studies try to use LLMs to identify and fix security issues in software code. The limitation of using a single LLM for this purpose is that it covers only limited aspects of software security. Agentic AI (i.e., an integration of multiple LLM-based agents and related tools) can cover more aspects of software security in an integrated application or framework. This project aims to advance the research and practice of using Agentic AI to improve software security. The tasks will include:
Response technology (response systems) allows teachers to ask questions to large groups of students and get aggregated and useful answers to guide the lecture. Most of the existing systems require preparing the questions in advance and offer little to no flexibility in asking ad hoc questions or even using the results from a question as the basis for a follow-up question.
The primary aim of this project is to design and implement an agile question generation approach that analyzes student open-text responses and produces contextually relevant follow-up questions during interactive lectures. While existing question-generation solutions focus on structured content, using open-text student responses for question generation in real-time remains challenging. Additionally, there is a lack of empirical evaluation of these systems in classroom environments.
With advances in artificial intelligence and natural language processing, automated question generation could be a promising technique for enhancing interactive learning environments. The effectiveness of the proposed solution could be evaluated through user studies, assessing its impact on student engagement, learning outcomes, and teaching adaptability. Teachers can dynamically adapt their teaching strategies by generating meaningful follow-up questions based on student responses, probing deeper into students' understanding, and fostering productive discussions.
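As a crude stand-in for the NLP pipeline, the sketch below surfaces dominant themes in open-text responses via word counts and slots them into a question template. The stopword list, template wording, and sample responses are all invented for illustration; a real system would generate follow-up questions with the language-processing techniques described above.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "it", "to", "of", "and", "in"}

def dominant_themes(responses, top_n=2):
    # Count content words across all responses; frequent words hint at
    # what the class is collectively talking about.
    words = Counter()
    for response in responses:
        words.update(w for w in re.findall(r"[a-z]+", response.lower())
                     if w not in STOPWORDS)
    return [word for word, _ in words.most_common(top_n)]

def follow_up_questions(responses):
    # Turn each dominant theme into a templated ad hoc follow-up question.
    return [f"Many of you mentioned '{theme}': can you explain how it relates "
            f"to the original question?" for theme in dominant_themes(responses)]

responses = [
    "recursion needs a base case",
    "without a base case recursion never stops",
    "recursion calls itself",
]
for q in follow_up_questions(responses):
    print(q)
```

In practice the theme extraction would be replaced by an LLM or topic model, but the surrounding loop (aggregate responses, derive themes, generate a follow-up) is the agile workflow the project targets.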
While the project can be assigned to a single student, it is recommended that a pair of students work on it.
Many organizations struggle with the right way to do digital transformation. "Big bang" methods are often costly and carry a high risk of failure that surfaces late in the process. Ideas from agile are gradually entering organizational digital transformation as an alternative to big-bang approaches.
In this task, we are interested in learning more about what agile transformation means and how it is practiced successfully by organizations. You will be expected to analyze existing research on agile transformation and digital transformation using qualitative literature review methods. Later on, we will together choose a real-world case organization for a field study involving interviews and observations. You will generate new knowledge and recommendations for how agile transformation should and should not be used by organizations that want to implement digital transformation.
This task requires that you have a good understanding of, and an interest in, empirical qualitative research. The working language for this task is Norwegian or English. The thesis can be written in Norwegian or English, but we recommend English. Please contact Babak before you select this task.
The focus of this thesis is to develop an Artificial Intelligence-based system to help students learn mathematical concepts while playing educational games. One way to provide help is to detect difficult moments during the interaction and then support the students when they face such moments. The challenging aspect of such projects is the "cold start problem": we need to know in advance how to detect difficult moments for individual students. Solving this problem will be a key aspect of this thesis.
Thesis Description
In a first step, the student(s) will design and implement the feedback tool using the wearable sensors. Afterwards, they will conduct a user study to test the usability of the system with a number of students. Once usability is established (after final adjustments to the system), the student(s) will conduct a larger user study to evaluate the effectiveness of the system. Finally, the candidate(s) will analyse the collected data and write up the thesis.
Requirements
The ideal candidate will have a background in system design and basic machine learning. Solid programming skills and an interest in hands-on development and experimentation are also required.
Programming skills: Python/Java.
The primary objective is to develop and demonstrate an AI Assisted Modelling App, showing how AI could be used as an assistant for modellers.
The workplaces produced will demonstrate AI collaboration and innovation principles and methodologies. AI Assisted Modelling implies intelligent user- and AI-agent-driven balancing of properties, capabilities, qualities, and services, reducing errors and change-management effort, and cutting calendar time and costs substantially.
The secondary objectives are:
The web-based Modelling Platform is being implemented in an Equinor Accelerator project and will support the tasks to be performed. Demonstrators will be implemented to showcase AI Assisted workplaces and capabilities that extend the Mimiris Modelling Platform and recent digitisation approaches, such as Intelligent Agents and Digital Twins.
We will conduct this work with the Customer (Company): KAVCA AS.
(AI-SECRETT) The need for sustainable transitions requires all sectors to enhance their competencies and skills to achieve the social, economic and environmental transition. This could be facilitated with competencies related to AI and creativity. This project will focus on identifying the competencies related to AI and creativity that would help towards sustainable transitions and designing a solution that could help people enhance their skills and competences. The tasks will include:
The outcome could be a means of supporting the co-design of learning activities related to AI and creativity. This task is related to the European project AI-SECRETT.
The green shift is high on every executive’s agenda, and with good reason. The urgency of the climate crisis and the associated transition to a sustainable society are changing the way firms create, capture, and deliver value, shifting the very fabric of today's business landscape. Firms must now deliver on a triple bottom line (environmental, social, and economic) and meet not only the needs of today's customers and shareholders but also future generations' needs and opportunities for value creation. A strategic response is required, and firms must make structural changes to accommodate a fully sustainable business model (SBM). Research suggests that firms that manage and mitigate their exposure to climate-change risks while seeking new opportunities for sustainable value creation will gain a competitive advantage over rivals in a carbon-constrained future. However, transitioning towards an SBM is challenging, and companies often lack the necessary data and insight to make correct and effective business decisions. Artificial Intelligence (AI) offers a possible solution by establishing a basis for data-driven and fact-based decision making, making it easier for firms to take a systems perspective, quantify impacts, and reduce the complexity of the sustainable transition. Although real and theorized examples of AI enabling SBMs exist, a comprehensive understanding of the relationship between AI and SBM is still missing, leaving a gap in our understanding of the underlying mechanisms and inhibiting firms’ ability to accelerate their sustainable transition. Thus, this project aims to take stock of current knowledge by studying the following research questions:
RQ1: What do we know about the relationship between AI and SBM?
RQ1.1: How can companies leverage AI for SBM?
RQ1.2: How can the relationship between AI, SBM, and competitive performance be conceptualized?
The rapid integration of artificial intelligence (AI) into everyday life is reshaping social practices and technological infrastructures. Women remain underrepresented in the development and use of AI, which allows systems to reproduce existing gender biases. Yet AI also holds potential for empowerment by strengthening digital confidence, competence, and agency. Investigating how AI can foster more inclusive and equitable technological futures is therefore essential.
Several previous projects and master's theses have developed interventions for women in computer science and AI specifically.
This master's thesis will build on an intervention that explores how an AI hackathon can be intentionally designed to empower women in AI. The intervention focuses on women who do not necessarily study computer science or work in IT, and who may have degrees or experience in any subject area. The results will be a set of actionable design principles for developing a hackathon, together with prototypes (developed by the hackathon participants) that demonstrate those principles in practice.
In this master thesis, the hackathon will be evaluated and re-engineered.
The research question is: How can a hackathon be designed and implemented to facilitate the empowerment of women in AI?
In the preliminary phase, the student will run a rapid literature review to examine existing research on the intersection of AI, hackathons, women, and empowerment. Then, the study will investigate technological solutions for building the hackathon. Technology solution ideas and hackathon principles will be generated through rounds of participatory co-design workshops. The master's student will have to recruit participants for the hackathon and run it in cooperation with the SBS group.
Relevant sources:
O. T. Aduragba, J. Yu, A. I. Cristea, M. Hardey and S. Black, "Digital Inclusion in Northern England: Training Women from Underrepresented Communities in Tech: A Data Analytics Case Study," 2020 15th International Conference on Computer Science & Education (ICCSE), Delft, Netherlands, 2020, pp. 162-168, doi: 10.1109/ICCSE49874.2020.9201693.
Paganini, Lavinia, and Kiev Gama. "Female participation in hackathons: A case study about gender issues in application development marathons." IEEE Revista Iberoamericana de Tecnologias del Aprendizaje 15.4 (2020): 326-335.
Gama, Kiev, et al. "Hackathons as inclusive spaces for prototyping software in open social innovation with NGOs." 2023 IEEE/ACM 45th International Conference on Software Engineering: Software Engineering in Society (ICSE-SEIS). IEEE, 2023.
R. Prado, W. Mendes, K. S. Gama and G. Pinto, "How Trans-Inclusive Are Hackathons?," in IEEE Software, vol. 38, no. 2, pp. 26-31, March-April 2021, doi: 10.1109/MS.2020.3044205.
Paganini, Lavínia, et al. "Opportunities and constraints of women-focused online hackathons." 2023 IEEE/ACM 4th Workshop on Gender Equity, Diversity, and Inclusion in Software Engineering (GEICSE). IEEE, 2023.
Open-text questions allow students to answer without being influenced by predefined options, thus eliminating some causes of bias and guessing.
The primary aim of this project is to develop an intelligent solution that allows a teacher to ask a knowledge-related open-text question and get an aggregated overview indicating, with a certain level of confidence, what percentage of students got it right, partially right, partially wrong, or wrong. This will allow the teacher to offer adaptive feedback to clarify any misunderstanding. The project could be designed to provide a real-time dashboard for teachers, offering insights into student responses, highlighting common misconceptions that may require further explanation, identifying knowledge gaps, and adjusting lectures dynamically.
Advances in natural language processing and artificial intelligence can be used to assess responses based on correctness, relevance, coherence, and depth of understanding. The dataset collected from student responses during lectures is expected to vary from a few tens to a few hundred responses per question. The proposed solution’s effectiveness can be tested in real lecture environments, ensuring that it meets the needs of teachers and students. Ultimately, this research will contribute to advancing AI-driven education technologies, demonstrating how automation and intelligent feedback mechanisms can enhance teaching effectiveness and student learning experiences.
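The bucketing-and-aggregation logic behind such a dashboard can be sketched in a few lines. The grader below uses crude token overlap against a reference answer purely as a stand-in (the thresholds, example question, and answers are invented); an actual system would score correctness, relevance, and coherence with NLP models as described above.

```python
def grade(response, reference, full=0.7, partial=0.3):
    """Bucket a response by token overlap with a reference answer.
    Placeholder scorer: a real system would use NLP-based assessment."""
    ref = set(reference.lower().split())
    overlap = len(ref & set(response.lower().split())) / len(ref)
    if overlap >= full:
        return "right"
    if overlap >= partial:
        return "partially right"
    return "wrong"

def dashboard(responses, reference):
    # Aggregate individual grades into the percentages a teacher would see.
    buckets = {"right": 0, "partially right": 0, "wrong": 0}
    for r in responses:
        buckets[grade(r, reference)] += 1
    n = len(responses)
    return {k: round(100 * v / n) for k, v in buckets.items()}

reference = "gradient descent minimizes the loss by following the negative gradient"
responses = [
    "gradient descent minimizes the loss by following the negative gradient direction",
    "it minimizes the loss somehow using gradients",
    "you pick the biggest weight",
]
print(dashboard(responses, reference))
# {'right': 33, 'partially right': 33, 'wrong': 33}
```

Whatever scorer is plugged in, the aggregation stays the same, which makes it easy to swap the placeholder for a learned model and compare both against human grading.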
The focus of the thesis is to improve and test an existing intelligent feedback system that helps the students while they are programming. This help should be provided in real-time using the eye-tracking data from the student and the log data from the IDE that the student is using. The challenge is to develop a system that is both effective and efficient in helping the students when they are facing difficulties in programming medium-size software.
Thesis Description
In a first step, the student(s) will design and implement the gaze-aware feedback tool. Afterwards, they will conduct a small user study to test the usability of the system with a small number of students. Once usability is established (after final adjustments to the system), the student(s) will conduct a larger user study to evaluate the effectiveness of the system. Finally, the candidate(s) will analyse the collected data and write up the thesis.
Requirements
The ideal candidate will have a background in basic machine learning and system design. Solid programming skills and an interest in hands-on development and experimentation are also required.
Programming skills: Python.
The intersection of artificial intelligence (AI) and healthcare presents an opportunity to enhance medical diagnostics, improve patient outcomes, and streamline the work of healthcare professionals. This master project proposal invites students to contribute to this transformative field by developing an innovative web application. This application will leverage AI technologies to analyze medical data, offering insights ranging from data visualizations to complex diagnostics. The project integrates three key areas: scintigraphy image analysis, blood analysis data, and anamnesis analysis. However, students can choose the direction of their studies and focus on the area(s) of their interest.
The intersection of artificial intelligence (AI) and healthcare presents an opportunity to enhance medical diagnostics, improve patient outcomes, and streamline clinical workflows. In previous years, this project has focused on developing an AI-driven web application capable of analyzing medical data to support decision-making for doctors and medical students.
This year's thesis will focus on further developing the system by improving existing functionalities, expanding diagnostic capabilities, and refining user experience. Additionally, the project will explore new machine learning models to enhance accuracy and reliability in medical diagnostics.
Students will be expected to extend the existing system and contribute to one or more of the following areas:
By participating in this project, students will continue the development of the AI-driven assistant, refine existing modules, and test improvements through a structured user study.
Project Objective
The primary goal of this project is to design and implement a web-based AI-driven diagnostic assistant that aids doctors in creating accurate diagnoses and helps medical students sharpen their digital skills. This assistant will harness the power of image processing, quantitative data analysis, and natural language processing (NLP) to analyse medical data comprehensively.
The ideal candidates should have:
Recommended technical skills:
Expected Project Work Packages
RBK (football analytics):
Video presentation
Presentation with sound-clip embedded in each slide (download and listen)
St. Olav / NTNU Med. fak. (medical image analysis):
Abdominal Aortic Aneurysms (AAA): Is minimally invasive or open surgery best for a given patient? (video)
Abdominal Aortic Aneurysms (AAA): Is minimally invasive or open surgery best for a given patient? (text)
Brain segmentation from high-res MR images
Kartverket:
Video (for the three projects below)
Using artificial intelligence to classify laser point clouds from airborne data capture
Computing DOP values to predict GNSS measurement quality
Streaming point clouds from a database to a web viewer
NINA:
Monitoring Norwegian nature loss with satellite-based earth observation and AI (Video)
Monitoring Norwegian nature loss with satellite-based earth observation and AI (Text)
MIA Health:
MIA (Monitorering-Innsikt-Aksjoner): Your data. Your digital twin. Your health. (video)
MIA (Monitorering-Innsikt-Aksjoner): Your data. Your digital twin. Your health. (text)
Catchwise:
Catchwise: Predicting where to find fish for commercial fishing boats
Maritime Robotics:
Dense Monocular Depth Estimation for Unmanned Surface Vessels
This thesis uses the combination of AI and biometrics (eye-tracking, EEG, facial expressions) to understand the processes underlying successful extreme programming scenarios (pair programming, test-driven development, continuous integration, refactoring). This understanding can help us develop innovative solutions for the
The ideal candidate will have a background in system design and basic machine learning. Solid programming skills and an interest in hands-on development and experimentation are also required.
Programming skills: Python/Java.
Teachers make rapid and complex decisions while managing classrooms, responding to students, and delivering instruction. Understanding these cognitive processes is crucial for improving teacher training, classroom strategies, and AI-driven educational tools. Eye-tracking technology, combined with Artificial Intelligence (AI), offers a powerful approach to analyzing how teachers allocate visual attention and make instructional decisions in real time.
This thesis aims to explore how AI-enhanced eye-tracking can be used to study teacher behavior, cognitive load, and decision-making patterns in educational settings. By leveraging AI to process and analyze eye-tracking data, the research seeks to uncover insights that can improve teacher training and optimize classroom dynamics. By integrating AI and eye-tracking, this study will provide valuable insights into teacher cognition and instructional decision-making. The findings could pave the way for more adaptive AI systems that support educators in real time.
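One classic building block of such eye-tracking analyses is fixation detection. The sketch below shows a simplified dispersion-threshold (I-DT) algorithm; the thresholds and sample data are arbitrary illustrations, as real values depend on the tracker and sampling rate:

```python
# Simplified dispersion-threshold (I-DT) fixation detection on (x, y) gaze samples.
# Thresholds and data are illustrative assumptions, not calibrated values.

def _dispersion(window):
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(points, disp_thresh=1.0, min_len=3):
    """Return (start_index, end_index_exclusive) windows classified as fixations."""
    fixations = []
    i, n = 0, len(points)
    while i + min_len <= n:
        j = i + min_len
        if _dispersion(points[i:j]) <= disp_thresh:
            # Grow the window while the gaze stays within the dispersion threshold.
            while j < n and _dispersion(points[i:j + 1]) <= disp_thresh:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations

# Two tight clusters of gaze samples separated by a saccade.
points = [(0, 0), (0.1, 0), (0.2, 0.1), (5, 5), (5.1, 5), (5.2, 5.1), (5.1, 5.2)]
fix_windows = idt_fixations(points)
```

In a thesis, features such as fixation counts and durations on specific classroom regions would then feed the AI analysis.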
An immune system approach to fake news classification is currently under development. It is an exciting new approach to fake news classification, drawing inspiration from antibody and antigen concepts in nature. This project seeks to extend and refine the current approach in various ways; the student(s) may choose their own path.
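To give a flavour of the immune-system metaphor, the sketch below implements a toy negative-selection scheme over bit-string features: detectors are random strings that match no "self" (legitimate) sample, and an item matched by a detector is flagged as anomalous. The representation, matching rule, and parameters are illustrative assumptions, not the approach under development:

```python
import random

# Toy negative-selection sketch inspired by artificial immune systems.
# Bit-string features, Hamming matching, and all parameters are assumptions.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def generate_detectors(self_set, n_detectors, length, radius, rng):
    """Keep random candidates that lie farther than `radius` from every self sample."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(rng.randint(0, 1) for _ in range(length))
        if all(hamming(cand, s) > radius for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(item, detectors, radius):
    """An item is flagged if any detector matches it within the radius."""
    return any(hamming(item, d) <= radius for d in detectors)

rng = random.Random(42)
self_set = [(0, 0, 0, 0, 0, 0), (0, 0, 0, 0, 0, 1)]   # "legitimate" feature vectors
detectors = generate_detectors(self_set, n_detectors=20, length=6, radius=1, rng=rng)
```

By construction, no self sample is ever flagged; coverage of the anomalous space grows with the number of detectors.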
Ongoing relevant projects include:
As decentralized technologies mature, modern platforms increasingly combine Web3 components (blockchain, smart contracts, decentralized storage) with traditional Web2 services (APIs, databases, authentication layers). Self-Sovereign Identity (SSI) adds another architectural layer, enabling secure, user-controlled identity management through verifiable credentials and decentralized identifiers (DIDs). Validating such hybrid architectures early is essential for ensuring interoperability, performance, and security. This project explores the use of Simulink and System Composer for modeling and validating the architecture of a Web2–Web3–SSI integrated system.
Objectives
Model a hybrid architecture combining Web2 services, a Web3 backend (e.g., blockchain nodes or smart contracts), and an SSI identity layer.
Identify key architectural concerns: authentication latency, credential issuance/verification flows, cross-system data integrity, and fault tolerance across distributed components.
Use Simulink/System Composer to simulate message flows, network delays, identity interactions, and multi-layer service dependencies.
Validate interoperability and detect architectural risks prior to implementation.
Methodology
Select a representative hybrid use case (e.g., Web2 app using SSI for login and a blockchain for audit logging or credential storage).
Build subsystem models representing Web2 backend, SSI agent/wallet, verifiable credential issuer/verifier, blockchain nodes, and communication channels.
Simulate scenarios such as credential verification under load, node failures, network latency spikes, and inconsistent DID resolution.
Analyze and verify the architecture models against requirement coverage, functional correctness, quality and performance attributes, regulatory compliance, etc.
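While the project itself targets Simulink/System Composer, the "credential verification under load" scenario in the methodology can be illustrated with a toy single-server queue in plain Python. All timings below are made-up assumptions:

```python
# Toy discrete-event sketch (plain Python, not Simulink/System Composer) of
# credential verification under load: one verifier, deterministic service time,
# a burst of arrivals. Timings are illustrative assumptions.

def simulate_queue(arrival_times, service_time):
    """Single-server FIFO queue; returns per-request completion times."""
    completions = []
    server_free_at = 0.0
    for t in sorted(arrival_times):
        start = max(t, server_free_at)       # wait if the verifier is busy
        server_free_at = start + service_time
        completions.append(server_free_at)
    return completions

# Burst of 5 verification requests arriving every 0.1 s, each taking 0.3 s to verify.
arrivals = [0.1 * i for i in range(5)]
done = simulate_queue(arrivals, service_time=0.3)
latencies = [d - a for d, a in zip(done, arrivals)]
```

Even this toy model shows latency growing linearly through the burst, the kind of bottleneck the Simulink simulation is meant to expose before implementation.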
Expected Outcomes
A Simulink/System Composer architecture that visualizes interactions between Web2 components, Web3 infrastructure, and SSI identity flows.
Simulation and validation results identifying performance bottlenecks, failure points, and integration challenges.
Architectural insights and recommendations for designing robust hybrid Web2–Web3–SSI systems.
Reliability is defined as the ability of a system to provide continuous correct service. Faults and attacks may affect the reliability of a system, and may have a larger or smaller impact depending on the architecture of the system. Various methods exist for reliability evaluation of systems, for example Fault Trees. We want to investigate how those methods can be applied to machine learning pipelines. This would allow comparing different architectures based on their tolerance to faults.
Project Description
A model is an abstraction of a system that highlights important features of it for a certain purpose, while abstracting the details that are not relevant. Models are used in different ways in software and systems engineering (e.g., UML or SysML diagrams). Some kind of models are used for evaluating quality properties like reliability, safety, or security. As an example, Fault Trees (FT) [1] are a very common kind of model that is used to estimate the reliability of a system.
There has been a lot of work in the literature to design methods to automatically derive fault trees and other dependability [2] models from models of a system or software architecture [3] [4]. The idea is that, from the information documented in the architecture, a lot can be said about how faults can be generated and how errors propagate. A lot of manual modeling effort can therefore be saved. For example, if we know that two components are connected in a client-server pattern, we know that a failure of the server will affect (propagate to) the client.
While those techniques are now established for traditional systems, very few works have addressed reliability models for machine learning pipelines. The objective of this project is to understand how existing techniques for traditional systems can be adapted to the machine learning context. The idea is to start from some diagram of the architecture of a machine learning pipeline, such as those that can be obtained with TensorBoard [5] or with Netron [6].
This work proposal involves:
The long-term research objective linked to this activity is to define methods that enable system architects to evaluate and compare machine learning architectures with respect to their ability to tolerate faults.
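As a minimal illustration of what a fault-tree evaluation over an ML pipeline could look like, the sketch below evaluates a toy tree with AND/OR gates over independent basic events. The example pipeline (prediction fails if data ingestion fails, or both redundant models fail) and the probabilities are assumptions for illustration only:

```python
# Minimal fault-tree evaluation: basic events have independent failure
# probabilities; AND gates multiply, OR gates combine as 1 - prod(1 - p).

def ft_probability(node, basic_probs):
    """Recursively evaluate P(top event) for a tree of ('AND'|'OR', children) tuples."""
    if isinstance(node, str):                  # leaf: a basic event
        return basic_probs[node]
    gate, children = node
    ps = [ft_probability(c, basic_probs) for c in children]
    if gate == "AND":
        out = 1.0
        for p in ps:
            out *= p
        return out
    if gate == "OR":
        out = 1.0
        for p in ps:
            out *= (1.0 - p)
        return 1.0 - out
    raise ValueError(gate)

# Toy ML pipeline: prediction fails if ingestion fails OR both redundant models fail.
tree = ("OR", ["ingestion_fault", ("AND", ["model_a_fault", "model_b_fault"])])
probs = {"ingestion_fault": 0.01, "model_a_fault": 0.05, "model_b_fault": 0.05}
p_top = ft_probability(tree, probs)
```

Deriving such trees automatically from a pipeline diagram (e.g., a Netron graph) is exactly the adaptation step this project would investigate.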
Needed skills
Mandatory
Useful to have (optional)
References
In such a thesis, we will focus on one of the various AI solutions based on biometric sensors (eye-tracking, heart rate, EEG) to enable better learning experiences for students. We will also focus on collecting data from students, such as eye-tracking, EEG, heart rate, skin temperature, and facial expressions. These data sources provide information about the students from different points of view, and combining them provides better predictions of students' behaviour and performance.
Requirements: The ideal candidate will have a background in system design and basic machine learning. Solid programming skills and an interest in hands-on development and experimentation are also required. Programming skills: Python/Java.
For details about the different options please contact kshitij.sharma@ntnu.no
The focus of this thesis is to use multiple sensors and artificial intelligence to predict various performance measurements of pair programmers. Pair programmers usually produce better programming results than individual programmers; it is therefore important to understand the factors that contribute to their success. Recent works have shown the advantages of multiple data sources, which provide information from diverse points of view, over individual data sources. In this thesis, the students will use eye-tracking, EEG, heart rate, and facial expressions as the data sources.
In the first step, the students will gather data from multiple wearable and pervasive sensors while the participants are pair programming. In the second step, the students will develop prediction algorithms using features from the interaction of the two programmers. Finally, the students will compare the various algorithms and feature sets and write up the thesis.
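One plausible example of a feature "from the interaction of the two programmers" is windowed physiological synchrony. The sketch below computes windowed Pearson correlation between two simultaneously sampled signals; the window size and the heart-rate data are invented for illustration:

```python
# Illustrative "interaction feature" between two pair programmers: windowed
# Pearson correlation of a physiological signal sampled for both at once.
# Window size and data are assumptions for illustration.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0            # no variance in a window: treat synchrony as 0
    return cov / (sx * sy)

def windowed_sync(sig_a, sig_b, window):
    """One synchrony value per non-overlapping window."""
    return [pearson(sig_a[i:i + window], sig_b[i:i + window])
            for i in range(0, len(sig_a) - window + 1, window)]

# Toy heart-rate traces for programmer A and programmer B.
a = [60, 62, 64, 66, 70, 68, 66, 64]
b = [58, 60, 62, 64, 75, 71, 73, 69]
sync = windowed_sync(a, b, window=4)
```

The resulting per-window values could then be aggregated into features for the prediction algorithms.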
Requirements: The ideal candidate will have a basic background in machine learning and deep learning algorithms. Programming skills: Python.
The last few years have seen an explosion of interest in the use of Artificial Intelligence, and much talk about its potential business value. Nevertheless, there is significantly less talk about the challenges organizations will face when implementing such solutions and how they should overcome these obstacles. Inhibiting factors are not only of a technological nature but also include organizational and human factors. This project will involve collecting and analyzing data in collaboration with researchers from the Big Data Observatory (https://www.observatory.no).
Advanced forms of analytics and artificial intelligence are increasingly deployed to support the work of healthcare workers. Medical doctors, nurses, and administrative staff either use, or are aided by, sophisticated technologies that are poised to radically change the nature of their work. For example, radiologists now rely increasingly on machine learning techniques and other applications of AI to diagnose patients, while many procedural and repetitive tasks are being done by machines. The objective of this project is to understand how the nature of work for health practitioners is changing, and what positive and negative consequences they experience.
This is a project in cooperation with Gintel.
This project aims to develop a pipeline for automated switchboard call handling using Speech-to-Text (STT), Text-to-Speech (TTS), and AI technologies. The context is automating inbound call traffic to a company. This typically includes:
This thesis uses biometric data (heart rate, EEG, eye-tracking) to understand how the brain processes visual conceptual models. Conceptual models are written in specific diagrammatic languages (two-dimensional visual notations) such as UML and BPMN. A lot of work has been done on understanding how humans comprehend and use such models in information systems and software development, from the points of view of IT, cognitive psychology, and linguistics. On the other hand, there is limited work on how the brain processes such models. Some work has been done in neuro-linguistics, but primarily looking at natural language texts. Several areas of the IT field have also used techniques from neuroscience for a while (NeuroIS, where one looks e.g. at the usage of IT systems and appropriate user interfaces, and NeuroSE, where one in particular looks at comprehension of software code).
The task consists of establishing an overview of current work in neuroscience, neuro-linguistics, and IT relevant for understanding how the brain processes conceptual models as part of model comprehension, and of developing and conducting experiments to investigate aspects of this, including how visual conceptual models are processed during comprehension, whether there are individual differences in the comprehension of models based on personal characteristics, and whether certain ways of modeling are more appropriate than others from the point of view of human processing. The report is expected to be written in English, and a good master thesis should contain material for a scientific publication.
We also hire research assistants within this topic area. The thesis is carried out in collaboration with Kshitij Sharma at IDI and other partners at NTNU and internationally.
Over the past decades, there has been significant progress in digital accessibility, driven by better tools, stronger governance, and increased awareness. However, shifting economic priorities and limited understanding of accessibility concepts threaten to stall this progress. For many developers, accessibility remains a vague and complex area: a checklist of standards without a clear sense of how to meet them or why they matter.
Accessibility is a broad field, encompassing diverse types of impairments, from visual and auditory impairments to motor limitations and cognitive challenges. Importantly, these impairments may be permanent, temporary (e.g., an injury), or situational (e.g., a noisy environment or glare on a screen). By designing with accessibility in mind, we improve digital experiences not just for those with disabilities, but for everyone.
This project examines how interactive and educational games can be utilized to promote empathy and understanding of accessibility challenges. The idea is to simulate different impairments through playable web-based scenarios that highlight common accessibility failures. Players will experience the frustrations faced by users with impairments and then be guided through the process of improving the design, seeing firsthand how the same content becomes more usable and inclusive.
Possible contributions of the project include:
- Designing and implementing an accessibility-focused learning game
- Simulating impairments such as blindness, color blindness, dyslexia, or motor impairments
- Demonstrating common accessibility barriers in websites and showing how to fix them
- Evaluating learning outcomes or user experiences with a prototype
This project is ideal for students interested in inclusive design, human-computer interaction, educational technology, or web development. It offers a chance to combine technical work with a meaningful social mission.
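As one concrete example of the kind of impairment simulation such a game could include, the sketch below approximates protanopia (red-blindness) by applying a 3x3 matrix to RGB pixel values. The matrix is a commonly circulated rough approximation, used here as an assumption, not a clinical model:

```python
# Rough protanopia (red-blindness) simulation via a 3x3 RGB matrix.
# The matrix is a widely circulated approximation, not a clinical model.

PROTANOPIA = [
    [0.567, 0.433, 0.000],
    [0.558, 0.442, 0.000],
    [0.000, 0.242, 0.758],
]

def simulate(rgb, matrix=PROTANOPIA):
    """Apply the simulation matrix to one (r, g, b) pixel with channels in 0-255."""
    return tuple(
        min(255, round(sum(m * c for m, c in zip(row, rgb))))
        for row in matrix
    )

pure_red = simulate((255, 0, 0))
pure_green = simulate((0, 255, 0))
```

Note how pure red and pure green both map to colors with nearly equal red and green channels, which is exactly why red/green-only cues fail these users, a point the game could make interactive.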
Co-supervisor: Dag Frode Solberg
Background and motivation: To meet climate challenges, European companies must integrate sustainability into all aspects of their business operations. With the introduction of the EU's new Corporate Sustainability Reporting Directive (CSRD), which enters into force in January 2025, stricter requirements are placed on companies' reporting of sustainability data. This includes detailed reporting of CO2 emissions, in particular so-called scope 3 emissions, which concern indirect emissions in the value chain.
An important reason for companies' difficulties with sustainability reporting is that exactly what "sustainability" should cover is under-specified: which of the companies' data are relevant, how the data should be compiled, and for whom, and in what way, sustainability reporting is relevant.
The master's thesis is carried out in collaboration with Aneo AS (aneo.com, formerly TrønderEnergi), a leading Nordic renewable energy company.
Objective: This master's project will explore challenges and solutions related to the collection, processing, and reporting of sustainability data to meet the new requirements.
Main supervisor: Eric Monteiro, IDI/NTNU
Co-supervisor: Kathrine Vestues, Aneo AS (kathrine.vestues@aneo.com)
Information about CSRD:
Problem Description: Despite the hype around AI development in recent years, truly impactful AI adoption in the public sector remains an unproven case. Reported adoption cases are often sporadic and random, and lack the magnitude of impact that an infrastructure-level technology change promises. In this project, we set out to study the traits and characteristics of impactful AI adoption in the public sector.
The thesis can involve:
OsloKommune: Oslo municipality wants a third party of up-and-coming students with state-of-the-art knowledge to analyse its newly established AI factory: a factory consisting of modules that can be reused in new AI solutions. The students are expected to familiarize themselves with the AI factory; analyse, sketch, and document it; and make recommendations on what could be done differently. Students will work together with Oslo municipality's Centre of Excellence for Artificial Intelligence.
Value for Oslo Municipality:
Pool of Potential NTNU supervisors: Xiaomeng Su (technical), Casandra Grundstrom (sociotechnical)
Customer contact chatbots are increasingly used in the public sector. In Norway, municipalities, NAV, Skatteetaten, and other public agencies employ home-made chatbots as the first line of communication with citizens. Some of these chatbots have been criticized because they can be perceived as excluding some citizen groups and as not being developed for the needs of the public sector.
You will do a literature analysis of the use of chatbots in the public sector. Afterwards, you will design and conduct an empirical study of how citizens use chatbots in their interaction with public agencies. Citizen groups and research questions will be developed in dialogue with you.
This research requires a good understanding of platforms, boundaries, self-service, and the needs of the public sector. This understanding will be based on literature analysis and later on empirical data generated by you. As output from this task we expect empirical knowledge, but also recommendations about how to develop and use chatbots in the public sector.
Classifying animals in the wild against complex backgrounds is a challenging and open problem. The complexity increases with natural vegetation, varying environmental conditions, half-captured animal parts in frames, lighting variations, and so on. This project aims to classify not only the animals captured in the frame (i.e., elk, rabbit, deer) but also the background surroundings into snow, grass, trees, etc. The project entails developing deep learning models to label foreground objects (animals) and the background into distinctive categories. Datasets such as Elephant Expedition (EE), Snapshot Wisconsin (SW), Snapshot Serengeti (SS), and Camera catalog (CC), used to identify animal species in the wild, contain millions of images that can be used for training and testing deep learning models. A Norwegian wildlife dataset collected by NINA is also available to us.
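A minimal sketch of the two-task setup (one output head for the animal, one for the background) might look as follows. The tiny hand-written weight matrices stand in for a trained network and are purely illustrative:

```python
import math

# Illustrative two-head classification sketch: one shared feature vector feeds
# two output heads, one for the foreground animal and one for the background.
# Weights, features, and class lists are made-up assumptions, not a trained model.

ANIMALS = ["elk", "rabbit", "deer"]
BACKGROUNDS = ["snow", "grass", "trees"]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict(features, w_animal, w_background):
    """Linear head per task: logits = W @ features; return (animal, background)."""
    def head(w, labels):
        logits = [sum(wi * f for wi, f in zip(row, features)) for row in w]
        probs = softmax(logits)
        return labels[probs.index(max(probs))]
    return head(w_animal, ANIMALS), head(w_background, BACKGROUNDS)

features = [1.0, 0.0]
w_animal = [[0.1, 0.0], [0.9, 0.0], [0.3, 0.0]]      # favours "rabbit" here
w_background = [[0.2, 0.0], [0.1, 0.0], [0.8, 0.0]]  # favours "trees" here
pred = predict(features, w_animal, w_background)
```

In the actual project, a shared convolutional backbone would replace the hand-written features, with the two heads trained jointly on animal and background labels.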
Digital competences, from basic to advanced, are required in an increasing number of workplaces. Various competency frameworks have been developed to specify which competencies are required. However, there is limited knowledge about how to design courses that are relevant for specific workplaces or categories of workers.
The task aims at designing and evaluating a playful and engaging toolkit that can help to involve learners in the co-design of their life-long learning courses and learning opportunities, taking into consideration the relevance of the content as well as the contextual constraints.
The toolkit is expected to take inspiration from board games and card-based co-design toolkits (e.g., Tiles, see https://www.tilestoolkit.io/).
The specialization project is expected to focus on understanding life-long learning and co-design of learning activities. The work can then be continued with a master focusing on the design and evaluation of the co-design tool.
Contact the supervisor to share your ideas and know more about this task.
AI technologies such as generative AI have the potential to replace humans in some work areas and tasks. Successful deployment of AI requires that humans and AI agents find collaboration models that are satisfactory and provide value. This task will look at the ways various types of AI are used in knowledge organizations and how knowledge workers cooperate with AI on a daily basis. The type of AI and the type of work practices will be decided together with you.
This is an empirical research task, meaning that you will use research designs such as case studies or design science to generate new knowledge about the problem area. The outcome of this task can be new empirical knowledge, new human-AI collaboration models, or design ideas for new interaction modes.
Electroencephalography (EEG) is an electrophysiological monitoring method to measure electrical activity in the brain. Through small noninvasive electrodes placed along the scalp, EEG records spontaneous electrical activity in the brain. Analyzing EEG signal data helps researchers understand cognitive processes such as human emotions, perception, attention, and various behavioral processes.
Optical motion capture systems allow precise recordings of participants' motor behavior inside small or larger laboratories, including information on absolute position. Movement tracking is useful in, for example, rehabilitation settings, where tracking a person's movements during training can give valuable information to provide feedback on whether an exercise is performed correctly or not. Traditional systems often use specialized sensors (Kinect, accelerometers, marker-based motion tracking) and are therefore limited in their area of application and usability. With advances in machine learning for Human Pose Estimation (HPE), HPE-based movement tracking has become a viable alternative. Combining HPE-based movement tracking and EEG can provide the patient with more holistic feedback and help with progress in rehabilitation.
In this study, the students will combine EEG and HPE-based movement tracking for an existing VR exergame. The EEG can be used to give the patient additional feedback on, for example, her/his attention level, movement intention, or cognitive load. For comprehensive analysis of bio-signals tracked by various sensors, movement and brain data need to be time-synchronized with the VR contents. Possible pipelines are https://www.neuropype.io/ or https://timeflux.io/, both based on LabStreamingLayer. They should work with our EEG and Unity out of the box, but are not yet synchronized with HPE.
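The time-synchronization step can be illustrated with a minimal nearest-timestamp alignment between two sampled streams. Real pipelines such as LabStreamingLayer, NeuroPype, or Timeflux also handle clock offsets and jitter, which this sketch ignores; the sampling rates and values are illustrative assumptions:

```python
import bisect

# Minimal sketch of time-synchronizing two sampled streams (e.g., EEG and
# HPE-based pose estimates) on a shared clock: for each pose timestamp, pick
# the EEG sample with the nearest timestamp.

def nearest_sample(timestamps, values, t):
    """Return the value whose timestamp is closest to t (timestamps sorted)."""
    i = bisect.bisect_left(timestamps, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    best = min(candidates, key=lambda j: abs(timestamps[j] - t))
    return values[best]

def align(pose_ts, eeg_ts, eeg_values):
    return [nearest_sample(eeg_ts, eeg_values, t) for t in pose_ts]

eeg_ts = [0.000, 0.004, 0.008, 0.012]   # 250 Hz EEG clock
eeg_vals = ["e0", "e1", "e2", "e3"]
pose_ts = [0.000, 0.011]                # slower pose-estimation clock
aligned = align(pose_ts, eeg_ts, eeg_vals)
```

The same idea extends to aligning both streams against the VR event log, so that feedback in the exergame refers to the right moment in time.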
This project is in collaboration with Prof. Marta Molina at the Institutt for teknisk kybernetikk. The study is associated with Vizlab, and the needed sensors and basic training on how to use them (VR headset, EEG equipment) are provided.
Competence frameworks aim at providing an overview of the competencies that are required in different contexts and/or by different categories of people, for example specifying the (digital) competencies that teachers should have, or all the competencies of researchers, and so on.
Though these frameworks are useful at the policy level, they are not easy for individuals to use when reflecting on their competencies and developing an effective professional development plan.
This task aims at developing a tool to help users playfully self-assess their competencies against what is recommended by a framework, and plan their professional development accordingly. The tool is expected to focus on playfulness, cooperation, and reflection.
The prototype will build around the “ResearchComp: The European Competence Framework for Researchers”, though it is expected to be more general and adaptable to different frameworks.
The task is expected to include design, prototyping and evaluation.
Contact the supervisor to share your ideas and know more about this task.
Geographical Information Systems (GIS), such as ArcGIS, provide a platform for crowdsourcing information from a wide geographical area. This approach has been used to crowdsource geological and climate-related content, as well as narratives about specific locations. This project aims to create a crowdsourcing GIS platform that could enhance knowledge about places through sharing stories and interesting experiences that showcase a place and contribute to providing a sense of place. Furthermore, the use of Large Language Models (LLMs) should be explored. The sub-tasks include:
Data-driven data science is attracting a lot of interest.
However, uptake into organizational practice is lagging significantly behind. Why?
The focus of this project/master is to supplement the possibilities provided by data-driven data science techniques with an empirical understanding of the conditions and circumstances for these techniques to be used in practice for consequential decision-making.
Empirical cases for studying data science in practice will have to be discussed. Candidates include the energy sector and healthcare.
An example problem situation is the effort to enhance the accountability and explainability of algorithms (XAI, explainable AI).
The project/master is part of SFI NorwAI (https://www.ntnu.edu/norwai), funded by the Norwegian Research Council.
If you are interested in aspects of modeling software and systems we can agree on a topic of your interest, as long as it is adequate for a Master's project. Topics may range from modeling of architectural aspects of systems, application of formal methods (for example model checking), definition of Domain-Specific Languages (DSL) and metamodels, etc. Just drop me a mail at leonardo.montecchi@ntnu.no to start the discussion.
If you are interested in software and system reliability/safety/security we can agree on a topic of your interest, as long as it is adequate for a Master's project. Topics may range from modeling of reliability/safety/security properties at architecture level, application of formal methods (for example model checking), fault injection, testing, etc. Just drop me a mail at leonardo.montecchi@ntnu.no to start the discussion.
Generative AI (GenAI) is introducing many new concepts in software development. The objective of this project is to define a metamodel and DSL for concepts related to LLMs and generative AI. The DSL will be used to document which GenAI components are used in a software system and how.
A metamodel has been defined in different ways: a model of a model; a definition of a language; a description of abstract syntax; a description of a domain [1]. Metamodels are one of the means of mapping concepts and relations within a certain domain, and they are often used as part of the process of creating a Domain-Specific Language (DSL) [2].
The objective of this project is to define a metamodel and DSL for concepts related to the use of LLMs, and GenAI in general, in software engineering. GenAI is regarded as a disruptive change in software development [3], which is completely changing the way software is developed and introducing many new concepts. However, there is not yet a standardized way to document the use of GenAI in the software architecture of systems.
The long-term research objective linked to this project is to define a methodology to specify and document GenAI components in software and system architectures.
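As a first informal sketch of what such a metamodel might contain, the dataclasses below capture a few candidate concepts. The concept names and attributes are assumptions meant to illustrate the idea, expressed in Python rather than MOF/Ecore; they are not a proposed standard:

```python
from dataclasses import dataclass, field

# Informal first cut at a GenAI metamodel as dataclasses. Every concept and
# attribute here is an illustrative assumption.

@dataclass
class LLModel:
    name: str                  # model identifier used by the system
    provider: str              # who hosts or serves it
    context_window: int        # maximum context size in tokens

@dataclass
class PromptTemplate:
    name: str
    template: str              # text with named placeholders

@dataclass
class GenAIComponent:
    name: str
    model: LLModel
    prompts: list = field(default_factory=list)
    consumes: list = field(default_factory=list)   # names of upstream components
    produces: list = field(default_factory=list)   # names of produced artifacts

# Documenting one hypothetical GenAI component in a system architecture.
summarizer = GenAIComponent(
    name="TicketSummarizer",
    model=LLModel(name="example-llm", provider="ExampleProvider", context_window=8192),
    prompts=[PromptTemplate(name="summarize", template="Summarize: {ticket_text}")],
    consumes=["TicketIngestion"],
    produces=["TicketSummary"],
)
```

A real metamodel would add constraints and relations (e.g., which components may share a model) and a concrete DSL syntax on top.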
Increasing social exclusion among young people is highlighted as a growing societal problem by both researchers and public-sector employees. Research findings also suggest that young people lack an understanding of how welfare services are organized, which prevents them from getting the help they need; and the longer one remains on the outside, the greater the challenges one faces when trying to get into work.
This project involves working directly with young people (aged 16-25) to develop digital tools and methods that can give them a real voice and a stronger role in shaping welfare and health services. This could not only give young people better support and increased empowerment as they navigate the public support system, but also contribute knowledge about how welfare services can be better adapted, potentially easing young people's transition to an active life. At the same time, it is important that privacy is safeguarded.
It is an advantage if you have an interest in and knowledge of empirical, qualitative research and design work. The thesis can be submitted in Norwegian or English, but a good command of Norwegian is necessary.
Contact Tangni Dahl-Jørgensen, tangni.c.dahl-jorgensen@ntnu.no, for more information about the project.
How can we help informatics students to get a better understanding of the impact of the technology they develop? This task will focus on designing a playful approach for learning about sustainability of IT solutions and how to integrate sustainability awareness in IT design.
The task will start from the co-design toolkit Tiles (https://www.tilestoolkit.io/) to modify it to promote reflection on sustainability. Students might choose to develop a physical, online, or hybrid toolkit.
Previous work has been done in the group on teaching sustainability to informatics students; it provides a good starting point while still giving you freedom to shape your work.
A design system is a collection of documented user interface (UI) elements, visual guidelines, and design principles that other people can use or refer to when designing digital products. Notable examples of design systems are Google’s Material Design and Apple’s Human Interface Guidelines. The main benefits of a design system are (1) improving design consistency across different digital products, since it serves as a single source of reference that other people can refer to, and (2) reducing development work, since UI designers do not need to design everything from scratch.
As land-based bipedal and quadrupedal robots become more capable, there is a growing need for graphical user interfaces (GUIs) that can be used to operate them remotely. This project will focus on creating a design system that can be used for developing GUIs for operating bipedal and quadrupedal robots.
Key activities in the project include:
This project is part of the OpenRemote project, which aims to create open-source design systems for remotely operated machines. The student will receive support from the partners affiliated with the project.
The project will be co-supervised by Dr. Taufik Akbar Sitompul (Department of Design, NTNU), who is also the initial contact person.
In today's digital age, it's crucial to manage important documents like diplomas and licenses securely and efficiently. Traditional methods of handling these documents are either outdated (i.e., paper-based) or fragmented, and can pose privacy and security risks. However, with new Web 3.0 technologies like self-sovereign identity and digital wallets, there's an opportunity to improve how we manage identity documents. This proposal aims to introduce a digital wallet platform that can securely store various identity documents such as academic diplomas, driving licenses, boat licenses, flying licenses, shooting licenses, and so on. By using advanced technology, this platform will make it easier for users to access and manage their documents while ensuring their privacy and security.
The main goal is to create a digital wallet platform (mobile + web) that can safely store and manage a wide range of identity documents. This platform will serve as a centralized place for users to keep their important identity documents, reducing the need for physical copies and/or fragmented storage methods. Additionally, the platform will include additional layers of security features, like encryption and biometric authentication, to protect users' sensitive information.
Furthermore, the envisioned platform will be user-owned and user-friendly, with easy navigation and integration with other digital systems. Users will be able to upload, organize, and share their documents with relevant authorities quickly and securely. The platform will also provide automatic reminders for document renewals, helping users stay compliant with regulations.
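One small piece of such a platform, document integrity checking, can be sketched as follows. This uses bare content hashes for illustration only; a real SSI wallet would rely on signed verifiable credentials and DIDs rather than hashes:

```python
import hashlib
import json

# Minimal integrity sketch for a wallet document (illustration, not an SSI
# implementation): a content hash lets the holder or a verifier detect tampering.

def fingerprint(document):
    """Stable SHA-256 over a canonical JSON serialization of the document."""
    canonical = json.dumps(document, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(document, expected):
    return fingerprint(document) == expected

# Hypothetical diploma record stored in the wallet.
diploma = {"type": "Diploma", "holder": "Kari Nordmann", "degree": "MSc", "year": 2025}
stored_hash = fingerprint(diploma)

ok = verify(diploma, stored_hash)          # True for the untampered document
tampered = dict(diploma, degree="PhD")
still_ok = verify(tampered, stored_hash)   # False: any change alters the hash
```

Canonical serialization (sorted keys, fixed separators) matters here: without it, two semantically identical documents could hash differently.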
Here's a summary of the proposed tasks:
Summary: This project aims to design and test the feasibility of an inclusive mobile application platform to support the mental health of mothers caring for children with intellectual disabilities. The application platform will offer AI-powered, culturally adapted, low-literacy-friendly tools, including visual resources, local-language content, stress-management support, and private connections to therapists, tackling barriers of stigma, access, and cost.
Activities:
Conduct needs assessments through focus groups and surveys
Develop a basic prototype with AI-driven stress management and therapist directories
Test usability, acceptability, and technical performance
Collaborate with experts in health innovation and mobile app development
Collaborations:
Namrata Pradhan (namrata.pradhan@ntnu.no) from the Department of Mental Health will serve as the product owner, providing support for refining project requirements.
Surya Kathayat from the Department of Computer Science will act as the project supervisor.
Reducing energy consumption in large distributed systems, such as maritime operations, requires effective collaboration among multiple stakeholders. While each stakeholder has their own goals, they also have a common goal, which is reducing the ecological impact of their operations.
This project will focus on designing user interfaces that support coordination between on-board and shore-based operators in terms of environmental feedback. By facilitating more effective information exchange and decision-making, the proposed user interfaces should help both groups of stakeholders take actions that lead to reduced energy consumption.
Key activities in this project will include:
This project is part of the OpenZero project, which aims to reduce carbon emissions in the maritime sector. The student will receive support from the partners affiliated with the project.
Cranes are traditionally controlled by operators who work inside the crane's cabin. Although this mode of operation is still common, significant progress has been made toward moving operators away from their cranes, so that they are not exposed to hazardous situations that may occur in their workplace.
As there are many types of cranes, this project will focus on offshore cranes. The key activities in the project include:
Web3 technologies open new and interesting possibilities in game development! In this project, a gaming platform will be developed that allows multiple players to play, learn, and/or earn points or digital assets.
Playing activities can differ depending on the chosen game (the student shall propose a game). For example:
The platform will also encourage users to create games or game content and reward them.
Mechanisms for exchanging earned points for other digital assets shall also be proposed and implemented.
Possible research aspects:
The platform model used by many big tech companies often leads to centralization of power and unfair treatment of users. In this task we want to investigate the concept of platform fairness, the relationship between platform core and periphery, and how new knowledge and design ideas can lead to fairer platforms.
This is an empirical research task. Empirical data can be collected from existing platforms such as Foodora or Uber, but also from organizations such as temp agencies. The exact field for empirical research will be decided together with you.
Newcomers to Norway must go through many bureaucratic application processes that require a high degree of system understanding. At the same time, this is a diverse user group with different starting points in terms of age, education level, and language and digital skills, which complicates access to, and understanding of, the information needed to obtain a permanent residence permit and a good life in Norway.
This master's project involves gathering user perspectives on the challenges refugees and immigrants face in obtaining information about, or using, public services, and carrying out participatory design/co-design activities in the development of digital support, e.g. in the form of collections of resources, simplified information retrieval, gamification for children and young people, etc. There is also the possibility of collaborating with public, municipal service providers and/or voluntary organizations that work with refugees on a daily basis.
The thesis can be submitted in Norwegian or English, but a basic understanding of Norwegian may be an advantage.
The project requires that you are interested in the human aspects of developing new technological solutions and are willing to work with people in a difficult life situation in a way that is empowering and safeguards user participation in design and development.
An increasing share of oil extraction on the Norwegian continental shelf takes place in very inaccessible fields at great ocean depths. At the same time, unmanned subsea facilities, which sit on the seabed but are remotely operated from shore or a nearby rig, are used ever more often. Sensor-based information (including temperature, pressure, seismics, and the magnetic and radiation properties of the rock formation) plays an increasingly important role in all phases: exploration, drilling, logging, and production.
Sensor-based information is (almost) all one has, but the sensors are unreliable: they wear out (typical lifetime is just over a year), they are not calibrated, and so on. How, then, should operators make good decisions under high uncertainty? What role do new IT tools play?
The aim of this thesis is to explore the current practices in Norway regarding diversity aspects (including but not limited to: identity, gender, race, ethnicity, neurodiversity, sexual orientation, age, and physical abilities) within software development processes and products. The research will be conducted through empirical software engineering methods.
The student(s) will:
Examine how various companies incorporate diversity into their organizations and projects.
Investigate how software development teams reveal diversity in their projects and products.
To achieve this, the student(s) will:
Conduct interviews with customers and students (autumn semester).
Analyze materials produced by groups in the TDT4290 course.
Perform report and software analysis.
Organize workshops with companies and students to validate their findings.
The goal is to understand how diversity aspects impact software development teams and identify effective guidelines to disclose diversity in software development processes.
Cico, O., Jaccheri, L., Nguyen-Duc, A., & Zhang, H. (2021). Exploring the intersection between software industry and Software Engineering education-A systematic mapping of Software Engineering Trends. Journal of Systems and Software, 172, 110736.
A.R. Gilal, J. Jaafar, S. Basri, M. Omar and M. Z. Tunio, "Making programmer suitable for team-leader: Software team composition based on personality types," 2015 International Symposium on Mathematical Sciences and Computing Research (iSMSC), Ipoh, Malaysia, 2015, pp. 78-82, doi: 10.1109/ISMSC.2015.7594031.
L. Gren and P. Ralph, “What makes effective leadership in agile software development teams?” in Proceedings of the 44th International Conference on Software Engineering, 2022, pp. 2402–2414.
Y. Wang and M. Zhang, “Reducing implicit gender biases in software development: does intergroup contact theory work?” in Proceedings of the 28th ACM Joint meeting on european software engineering conference and symposium on the foundations of software engineering, 2020, pp. 580–592
Gunatilake, H., Grundy, J., Hoda, R., & Mueller, I. (2024). The impact of human aspects on the interactions between software developers and end-users in software engineering: A systematic literature review. Information and Software Technology, 107489.
https://sbs.idi.ntnu.no/
This project will build upon the findings from humus.name by creating a 2D billboard that will be deformed to match the silhouette of a highly complex object from any angle. The goal is to make sure that the biggest triangle of the billboard fills as much area as possible, as this is linked to hardware rendering efficiency. To achieve this, the project must utilize techniques such as simple neural networks that deform the billboard to the correct shape.
If time allows, the project will also explore and compare other methods that can achieve the same results, and/or investigate how depth and textures can be applied to these silhouettes to achieve a convincing 3D appearance.
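One way to make the "biggest triangle fills as much area as possible" criterion measurable is sketched below: take the convex hull of the silhouette points (a simplification, since real silhouettes are generally non-convex) and brute-force the largest triangle over the hull vertices. This is an illustrative baseline only, not the neural deformation technique the project proposes:

```python
import itertools

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def triangle_area(a, b, c):
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])) / 2

def largest_inscribed_triangle(hull):
    """Brute-force the biggest triangle over hull vertices; fine for
    the small vertex counts a deformed billboard would have."""
    return max(itertools.combinations(hull, 3), key=lambda t: triangle_area(*t))

# Silhouette "pixels" of a unit square plus an interior point.
sil = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
hull = convex_hull(sil)          # interior point is discarded
tri = largest_inscribed_triangle(hull)
print(len(hull), triangle_area(*tri))  # 4 0.5
```

A learned deformation would replace the hull step, predicting billboard vertex offsets directly from the view direction, but the same area metric can score its output.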
Some other relevant sources:
DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation
Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes
Neural Subdivision
Requirements
Supervisors: Theoharis Theoharis (NTNU)
With the increasing reliance on mobile applications in daily life, concerns about their energy consumption, data usage, and environmental impact have grown significantly. Mobile apps are ubiquitous, but their development and usage can have a significant impact on sustainability (both social and environmental). Mobile apps contribute to carbon footprints through intensive CPU processing, network activity, and inefficient coding practices, often resulting in excessive battery drain and resource waste. Developing eco-friendly mobile apps is crucial to reducing these impacts while enhancing performance and user experience.
This MSc thesis aims to develop practical guidelines for designing mobile applications that minimize energy consumption and resource use throughout their lifecycle. The thesis will provide a comprehensive evaluation of the areas and characteristics of mobile apps that contribute to reducing their impact on sustainability.
Suggested research questions for this study could be: How do software development practices contribute to sustainable mobile applications? Which design principles, coding practices, and architectural strategies can developers adopt to minimize the energy consumption and resource use of mobile applications?
The research starts with an extensive literature review of existing research on eco-friendly mobile app development practices. Then, the thesis will carry out a benchmark analysis of existing apps, analyzing energy consumption patterns across a sample of widely used mobile apps (e.g., social media, navigation, and gaming apps) using existing tools. Qualitative interviews with developers and industry experts (mobile app developers, UX designers, and software engineers) will provide insights into current development practices and the challenges of balancing performance with energy efficiency. Based on the findings from the benchmark analysis and the interviews, the expected outcome of the thesis is a set of practical guidelines that empower developers to design mobile applications with improved energy efficiency, reducing their environmental footprint. The work will be part of the activity at CESICT - Center for Sustainable ICT.
Since its release into the public domain in 2022, ChatGPT garnered more than one million subscribers within a week. The generative AI tool took the world by surprise with its sophisticated capacity to carry out remarkably complex tasks. The extraordinary abilities of ChatGPT to perform complex tasks within the field of education have caused mixed feelings among educators, as this advancement in AI seems to revolutionize existing educational praxis. This topic will investigate the opportunities and challenges of generative AI (through a case study) and offer recommendations on how generative AI could be leveraged to maximize teaching and learning. The goal is to identify how these evolving generative AI tools could be used safely and constructively to improve education and support students' learning.
The project aims to revolutionise university credit management by integrating blockchain with existing educational systems using Blackboard and Inspera APIs. EduWallet ensures secure, efficient record-keeping and easy credit transferability across institutions.
API Integration:
Smart Contracts for Enrollment: Simplifies the course enrollment and withdrawal processes using blockchain to guarantee secure, verifiable transactions.
Credit Transfer: Employs a blockchain ledger for secure storage and seamless transfer of credits, enhancing student mobility.
Real-Time Verification: Offers instant verification of academic records, accessible by employers and institutions, ensuring accuracy and preventing fraud.
Token-Based Incentives: Rewards students with tokens for academic achievements, redeemable for special privileges like exclusive workshops or priority course enrollment.
Analytics Dashboard: Provides a real-time dashboard for students to monitor credits, course statuses, and rewards eligibility.
Web app (and/or mobile app), Blockchain, API integrations, Database, Security
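To make the record-keeping idea concrete, here is a minimal Python sketch of a hash-chained credit ledger. It is an in-memory toy, not the Blackboard/Inspera integration or a real blockchain, and all class and method names are invented for illustration:

```python
import hashlib
import json

class CreditLedger:
    """Toy append-only ledger: each entry is chained to the previous
    one by a SHA-256 hash, so past records cannot be silently edited."""

    def __init__(self):
        self.entries = []
        self.balances = {}

    def _hash(self, body: dict) -> str:
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def award(self, student: str, credits: int, course: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {"student": student, "credits": credits,
                 "course": course, "prev": prev}
        entry["hash"] = self._hash(dict(entry))  # hash the body, then store it
        self.entries.append(entry)
        self.balances[student] = self.balances.get(student, 0) + credits

    def verify_chain(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or e["hash"] != self._hash(body):
                return False
            prev = e["hash"]
        return True

ledger = CreditLedger()
ledger.award("alice", 7, "TDT4290")
ledger.award("alice", 10, "TDT4145")
print(ledger.balances["alice"], ledger.verify_chain())  # 17 True
```

In the envisioned platform this role would be played by an actual blockchain ledger and smart contracts; the sketch only shows why chained hashes make retroactive edits detectable, which is the property real-time verification relies on.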
Most organizations are currently undergoing different types of digitalization and digital transformation. The majority of these digitalization processes are initiated and led by top management, in collaboration with external consultants and technology vendors. This means employees and their in-depth knowledge of their work practices are often excluded from these processes because the employees don't have the time, the skills, or the autonomy to participate effectively. The consequence is often failed or costly projects.
In this task we ask you to investigate successful and failed attempts at employee participation in digital transformation, understand the challenges, and recommend best practices for future digitalization initiatives.
This is an empirical research task that is grounded in the latest research on digital transformation. We expect you to employ rigorous methods to analyze past research, plan research studies, and conduct empirical studies in organizations that are undergoing digital transformation. The specific case organization and the specific focus of the task will be further developed in collaboration with you.
This task requires that you have a good understanding of, and an interest in, empirical qualitative research. The working language for this task is Norwegian. The thesis can be written in Norwegian or English, but we recommend English. Please contact Babak before you select this task.
Supervisors: Michail Giannakos, Giulia Cosentino
Place: LCI Lab: https://lci.idi.ntnu.no/
Suitable for: Two students
You can see a video of the current prototype here: https://www.youtube.com/watch?v=3tpqfkSDkE8
Introduction
The field of multisensory technology allows the learner to interact beyond the usual input devices (e.g., keyboard and mouse) through new forms of natural user interfaces such as full-body interaction, gestures, voice, etc., and provides multisensory stimulation (e.g., through sounds, lights, visuals) to support users' learning. Moreover, the number of sensors provided by multisensory technologies supports the collection of multimodal data that adds sensemaking and predictive power to previous forms of analysis. The goal of this project is to explore how Motion-Based Technology (MBT) and GenAI capabilities (e.g., LLMs and multimodal LLMs) may facilitate children's meaningful STEM learning.
Thesis Description
In the first step, the student needs to review the literature and become familiar with adaptive learning models, databases and processes, and multimodal learning analytics (MMLA), as well as the context of multisensory technologies. Then, supported by best practices found in and adapted from the literature, the candidate will define a model able to suggest the activity and interaction the learner should perform with a given content and interaction paradigm, based on the MMLA. Overall, the project aims to: 1) identify aspects of Motion-Based Technology (MBT) and analytics that support children's learning, with a focus on math, 2) develop a set of practices, functionalities and technological interfaces that materialize those aspects, and 3) evaluate and refine those practices and the technological interfaces.
Requirements
The ideal candidate will have a background in data science and modeling, and experience with using/integrating LLM technology into applications. Solid back-end programming skills (Python, C#, or JavaScript) and an interest in hands-on development and experimentation are also required.
Programming skills: Python or C# or JavaScript
Objective: To develop and evaluate AI-based analytics models for tracking and enhancing learner engagement and performance in simulations, focusing on STEM.
Description: This research aims to develop AI-based analytics models to enhance the effectiveness of laboratory simulations in online learning. The study will focus on tracking learner interactions within simulations developed using Articulate Storyline 360, generating actionable insights into student engagement, skill acquisition, and performance trends. These insights will inform real-time interventions and instructional design strategies to improve student outcomes. A case study on STEM education.
Objective: To create gamified simulations with adaptive AI mechanics in Articulate Storyline 360 and evaluate their effectiveness in enhancing motivation and promoting deeper learning.
Description: This topic investigates the use of gamification in simulation-based learning environments developed with Articulate Storyline 360. The study will incorporate AI elements such as performance tracking and adaptive game mechanics to personalize learning experiences. By examining learner motivation, engagement, and retention rates, the research will assess how gamified simulations can address challenges in online education and promote deeper learning.
The focus of this thesis is to develop a system that helps novices in programming while debugging. One way to provide help is to learn from an expert how to look at the program while finding bugs in the code. This way of providing help is called Expert's Movement Mapping Examples. Most efforts in this direction include the use of the expert's gaze in the problem space. In this thesis the student(s) will exploit the use of dialogue as well as the gaze of the expert.
Thesis Description
In a first step, the student(s) will design and implement the gaze-enabled feedback tool. Afterwards, they will conduct a small user study to test the usability of the system with a small number of students. Once the usability of the system is established (with the final changes applied to the system), the student(s) will conduct a larger user study to evaluate the effectiveness of the system. Finally, the candidate(s) will analyse the collected data and write up their thesis.
Requirements
The ideal candidate will have a background in system design. Solid programming skills and an interest in hands-on development and experimentation are also required.
Programming skills: Python/Java.
Online meeting platforms are beginning to offer their raw image and sound data for processing via SDKs; for example Zoom:
https://developers.zoom.us/docs/video-sdk/linux/raw-data/
The idea behind this project is to develop facial processing tools based on Zoom's SDK.
Requirements:
Knowledge: Linux, Python, C/C++
Courses: TDT4195 (Visual Computing Fundamentals), TDT4230 (Graphics & Visualization), or equivalent.
Supervisor:
Prof. Theoharis Theoharis, Dr. Antonios Danelakis (IDI, NTNU), theotheo@ntnu.no
Read also: Writing a Master's Thesis in Language Technology
Large language models form the basis of almost all currently topical AI research, making it vital to identify and rectify different types of demographic biases in those models (be that bias based on gender or sexual identity, or on cultural, ethical or social background, etc.). This has triggered intense research on fair representation in language models, aiming both at building and using unbiased training and evaluation datasets, and at changing the actual learning algorithms themselves. There is still a lot of room for improvement, though, both in identifying and quantifying bias, in developing debiasing methods, and in defining bias as such.
Multiple movements, like opening or closing the hand, grasping, or showing the palm, can be decoded from EEG signals recorded while attempting those movements. The decoded movements can serve multiple purposes. For example, in neurorehabilitation they can be used to provide feedback to a patient performing therapy to recover hand movements after stroke, and in brain-computer interfaces they can generate outputs that control an external device such as home appliances, computer games, or toys.
The objective of this project is to decode movement intentions by combining low-density EEG and source reconstruction (estimation of the activity inside the brain from the electrodes on the scalp). The project involves recording EEG signals from multiple participants, analyzing the data and building offline/online classifiers using state-of-the-art machine/deep learning algorithms, and developing software games such as the one in this link: Video semi final (youtube.com).
Preliminary results are now published here: https://link.springer.com/article/10.1186/s40708-024-00224-z
This project will provide a foundation for developing a wearable solution based on a few electrodes that can later be used in neurorehabilitation therapies. The project is done in collaboration with Marta Molinas at the cybernetics department.
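As a rough sketch of the offline-classification step, under strong simplifying assumptions (synthetic two-channel signals, per-channel variance as a crude stand-in for band power, and a nearest-centroid rule instead of the state-of-the-art machine/deep learning the project calls for):

```python
import math
import random

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def features(trial):
    """One feature per EEG channel: signal variance (a crude proxy
    for the band power real pipelines would extract)."""
    return [variance(ch) for ch in trial]

def fit_centroids(trials, labels):
    by_label = {}
    for t, y in zip(trials, labels):
        by_label.setdefault(y, []).append(features(t))
    return {y: [sum(col) / len(col) for col in zip(*fs)]
            for y, fs in by_label.items()}

def predict(centroids, trial):
    f = features(trial)
    return min(centroids, key=lambda y: math.dist(f, centroids[y]))

# Synthetic 2-channel trials: "grasp" has a stronger channel-0 rhythm.
random.seed(0)
def trial(amp0, amp1):
    return [[amp * math.sin(0.3 * i) + random.gauss(0, 0.2)
             for i in range(200)] for amp in (amp0, amp1)]

train = [trial(2.0, 0.5) for _ in range(20)] + [trial(0.5, 2.0) for _ in range(20)]
labels = ["grasp"] * 20 + ["rest"] * 20
centroids = fit_centroids(train, labels)
print(predict(centroids, trial(2.0, 0.5)))  # prints: grasp
```

The real project would replace the synthetic trials with recorded EEG, the variance features with source-reconstructed band power, and the centroid rule with trained machine/deep learning models, but the fit/predict structure stays the same.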
Visuospatial neglect is a commonly experienced neuropsychological condition affecting the contralesional side in post-stroke patients, leaving patients with egocentric or allocentric perceptual problems. Diagnostic tools for visual neglect include the apples test, balloons test, and bells cancellation test, all administered on paper. While psychometrically sound, these tests are administered in an overtly clinical setting, lacking depth as a test parameter, only allowing for crude temporal data collection and gaze observation, and also being limited in spatial scope to the size of the paper. By having a limited set of test parameters, the status quo represents a barrier to advancing our understanding of the mechanisms behind visuospatial neglect and its effect in everyday settings of the patients. To mitigate these limitations, a VR environment is being developed for assessment of neglect by utilizing low-cost, off-the-shelf VR headsets with integrated eye trackers, along with a custom-developed and highly flexible virtual environment, where the test parameters can be altered based on the needs of the clinician.
This master project aims to bring flexible data visualization into the aforementioned VR environment. For starters, the students will look at using Python to generate graphs/plots of gaze data which has previously been collected from the VR environment, and then display these plots directly in the VR environment itself (i.e. the gaze plot of the person during the test). Next, one can classify the fine-grained gaze data into more meaningful units, such as fixations or saccades, and derive more relevant statistical information for the clinician. Eventually the information should be visualized in an easy-to-comprehend manner for the clinician.
This work is carried out in collaboration with the Department of Acquired Brain Injury, St. Olav's Hospital.
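The step of classifying fine-grained gaze data into fixations and saccades is commonly done with a velocity-threshold (I-VT) rule; the sketch below is a minimal illustration with invented threshold and sampling values, not the project's actual pipeline:

```python
import math

def classify_ivt(samples, dt, threshold):
    """Velocity-threshold (I-VT) gaze classification: a sample whose
    point-to-point speed is below `threshold` belongs to a fixation,
    otherwise to a saccade.

    samples   -- list of (x, y) gaze positions (e.g. degrees)
    dt        -- sampling interval in seconds
    threshold -- speed threshold in the same units per second
    """
    labels = ["fixation"]  # first sample has no velocity; assume fixation
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        labels.append("saccade" if speed > threshold else "fixation")
    return labels

# Three steady samples, one large jump, then two steady samples (120 Hz).
gaze = [(0, 0), (0.01, 0), (0.02, 0), (5, 0), (5.01, 0), (5.02, 0)]
labels = classify_ivt(gaze, dt=1 / 120, threshold=30.0)
print(labels)  # exactly one "saccade", at the large jump
```

Once samples carry such labels, per-fixation statistics (count, duration, spatial spread per hemifield) become straightforward to aggregate for the clinician-facing visualizations.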
Supervisor: Alexander Holt
Co-supervisors: Tor Ivar Hansen, Xiaomeng Su
As a result of increasing digitalization, IT tools are used in multiple contexts by users with multiple backgrounds, and are intertwined in complex ways with everyday practices. The complexity of the context of use raises a number of ethical issues for software developers and users, including algorithmic biases, privacy of personal data, addictive design, and concerns about sustainability.
This specialization/master project aims at the development and evaluation of a game to facilitate reflection and discussion on ethical issues related to IT tools among informatics students.
Previous work on teaching ethics to informatics students has been done in the group and provides a good starting point, while still giving you freedom to shape your work.
Digital transformation is influencing all workplaces. The digital transformation that is envisioned is not always successful, as witnessed by, for example, the challenges connected to the introduction of Helseplattformen. One aspect that is often underestimated is the competences workers need in order to participate in the digital transformation in a meaningful way.
This task aims at designing a game to help workers to understand the space of possibility of new technologies in their workplace and their impact. Focus will be on systemic and critical thinking. Students are welcome to define, in cooperation with the supervisor, specific areas of interest, with respect to the learning objectives of the game, and game genre and technology.
AI tools are increasingly used in different workplaces. This project focuses on the use of AI for supporting creativity, with focus on ethical and responsible use. More specifically, the task is centered around the design and evaluation of a game to learn about how to use AI for supporting creativity and innovation.
Students are welcome to define, in cooperation with the supervisor, specific areas of interest, with respect to specific target groups (e.g. specific workplaces), the learning objectives of the game, and game genre and technology.
The specialization project is expected to focus on understanding how AI can be used for supporting creativity. The work can then be continued with a master focusing on the design and evaluation of the game.
This project aims to explore the transformative impact of generative AI on arts by examining how it disrupts traditional processes of artistic creation, audience engagement, and the global art market.
Generative AI will fundamentally disrupt the arts. While it might pose a threat to artistic integrity, the uniqueness of artworks, and traditional art market mechanisms, it also affords opportunities to enhance creative processes, the mutability of artworks, and multilateral interactions with the audience. Generative AI can assist in the artist's creative process through idea generation, style transfer, and real-time experimentation.
The project starts with a literature review in the area, which will help narrow down the topic. This will directly shape the empirical part of the thesis, which will also be based on the students' interests. The second phase includes designing an empirical study and collecting data using qualitative and/or quantitative methods. For example, the project may focus on developing AI tools (e.g., a custom GPT) that support the creation process. In the final phase, the students will analyse the collected data and write up their thesis.
As software systems evolve, many organizations struggle with outdated documentation and legacy diagrams that describe system architectures, workflows, or business processes. These diagrams, often created using UML, flowcharts, or proprietary notations, are difficult to translate into modern programming languages. Manual conversion is time-consuming and error-prone, highlighting the need for automated solutions. Recent advancements in Generative AI (GenAI), particularly large language models (LLMs) and vision-based AI models, offer promising approaches to automate the conversion of legacy diagrams into functional code. This thesis aims to explore how GenAI can be used to interpret, analyze, and generate source code from legacy diagrams, reducing the effort required for software modernization.
Read also: Writing a Master's Thesis in Computational Creativity
To be creative, we need to produce something that is new, meaningful, and has some sort of value. Generative AI is able to support humans in creative processes, but also to be creative itself, or to assess whether an idea or a product is creative. A computational creativity project can investigate any creative field matching the interests and backgrounds of the student or students (language, design, music, art, mathematics, computer programming, etc.), and concentrate on one or several aspects of computational creativity, such as the production, understanding or evaluation of creativity, or on computer systems that support human creativity.
In particular, the project can investigate the transitions between different creative artforms, e.g., generating music or images based on textual input (as in Stable Diffusion models), generating music based on images or text, or generating text based on music or images.
Domain-Specific Languages (DSLs) are tailored programming or specification languages designed for specific problem domains, such as hardware description (VHDL, Verilog), data analysis (R, SQL), and automation (BPMN, Terraform). Developing and maintaining DSLs requires domain expertise and significant effort in syntax design, compiler construction, and user documentation. Generative AI (GenAI) models, particularly large language models (LLMs) like GPT-4, have demonstrated capabilities in code generation, program synthesis, and natural language understanding. This thesis aims to explore how GenAI can be leveraged to support DSLs in various stages, including DSL creation, code synthesis, debugging, and usability improvement.
As software becomes more complex, its energy consumption and environmental impact grow significantly. The concept of green coding focuses on writing efficient, energy-saving code to reduce carbon footprints. However, optimizing code for sustainability is challenging, requiring expertise in energy-efficient algorithms, compiler optimizations, and hardware-aware programming. Generative AI (GenAI) presents a promising solution by automatically generating, refactoring, and optimizing code for better energy efficiency. AI-powered tools can suggest improvements, reduce redundancy, and enhance performance while maintaining functionality. This thesis will explore how Generative AI can assist developers in writing energy-efficient code and evaluate its effectiveness in real-world scenarios.
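As a small illustration of the kind of refactoring green coding targets: the two functions below compute the same result, but the second does asymptotically less CPU work, which is a rough first-order proxy for energy use (real studies would measure energy directly, e.g. via hardware power counters; the example and its names are illustrative, not from any specific tool):

```python
import time

def sum_squares_naive(n):
    # Builds an intermediate list, then iterates it again by index:
    # more CPU work and memory traffic per result.
    values = []
    for i in range(n):
        values.append(i * i)
    total = 0
    for i in range(len(values)):
        total += values[i]
    return total

def sum_squares_lean(n):
    # Closed-form formula for sum of squares 0..n-1: O(1) work.
    return (n - 1) * n * (2 * n - 1) // 6

n = 200_000
t0 = time.process_time(); naive = sum_squares_naive(n); t_naive = time.process_time() - t0
t0 = time.process_time(); lean = sum_squares_lean(n); t_lean = time.process_time() - t0
assert naive == lean  # same behavior, far less work
print(f"naive: {t_naive:.4f}s  lean: {t_lean:.6f}s")
```

A GenAI assistant for green coding would propose exactly this kind of behavior-preserving rewrite; the thesis's evaluation question is whether such suggestions hold up under real energy measurements.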
As digital technology advances, the environmental impact of web development has become a growing concern. Websites contribute to carbon emissions through energy-intensive processes such as server hosting, data transfer, and resource-heavy design elements. Green web design aims to reduce these environmental impacts by optimizing performance, minimizing resource usage, and improving accessibility. Generative AI (GenAI) presents a promising opportunity to enhance green web design by automating sustainable coding practices, optimizing resource efficiency, and providing AI-driven recommendations for eco-friendly development. This research explores how GenAI can assist in creating sustainable web solutions while maintaining usability and performance.
Mathematical word problems are a fundamental aspect of education, requiring both natural language understanding and problem-solving skills. Traditional methods for solving such problems rely on rule-based approaches or symbolic reasoning, but recent advances in Generative AI have opened new possibilities for automated problem-solving. Large language models (LLMs) and neural networks can now interpret, reason, and generate step-by-step solutions for complex mathematical problems.
Co-supervisor: Prof. Michail Giannakos
This master thesis focuses on leveraging Generative AI to solve visual math problems. This research aims to explore how AI can interpret and reason through mathematical problems presented in visual formats, such as graphs, geometric diagrams, or handwritten equations.
This master thesis explores the role of Generative AI (GenAI) in Software Usability Testing. This research will investigate how AI can enhance usability evaluation processes, automate testing tasks, and improve user experience (UX) assessment in software development.
Evaluate the ability of Generative AI, such as Large Language Models (LLMs), to understand and work with modeling tasks for systems and software engineering. This project can be customized in directions, such as generating code from models, generating models from natural language descriptions, modifying existing models, etc.
Semi-formal models (e.g., UML diagrams) are used in different tasks of system and software engineering, for example for documenting the system and software architecture. While modeling tasks are a creative effort, they also require much manual effort, and the resulting models are typically error-prone and difficult to maintain. This project aims to exploit the potential of generative artificial intelligence (GenAI) to simplify and automate modeling tasks in software and systems engineering.
GenAI has seen a dramatic increase in popularity in the last few years, following the release of ChatGPT in 2022 and of other generative models since. GenAI is having a major impact in many disciplines, and it is considered a major disruption in Software Engineering and Systems Engineering tasks as well.
Research on the use of GenAI for software engineering tasks is emerging, for example for code refactoring tasks, writing test cases, etc. The objective of this project is to investigate the use of GenAI for modeling tasks in software and systems engineering. This project can be adapted to focus on different kinds of modeling tasks and different kinds of system models. Some examples include:
The long-term research objective linked to this activity is to simplify modeling tasks in software and systems engineering, through the use of GenAI.
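As a concrete starting point, a pipeline around an LLM could treat PlantUML text as the model format and post-process or validate the generated diagrams programmatically. A minimal sketch, assuming PlantUML class diagrams as input (the helper functions are illustrative, not part of any existing tool):

```python
import re

def extract_classes(plantuml: str) -> list[str]:
    """Return the class names declared in a PlantUML class diagram."""
    return re.findall(r"^\s*class\s+(\w+)", plantuml, flags=re.MULTILINE)

def extract_relations(plantuml: str) -> list[tuple[str, str]]:
    """Return (source, target) pairs for simple association arrows."""
    return re.findall(r"^\s*(\w+)\s*-+>\s*(\w+)", plantuml, flags=re.MULTILINE)

# A small model as it might come back from an LLM:
model = """
@startuml
class Order
class Customer
Customer --> Order
@enduml
"""

print(extract_classes(model))    # ['Order', 'Customer']
print(extract_relations(model))  # [('Customer', 'Order')]
```

Checks like these could, for example, verify that every class mentioned in a generated relation is actually declared, before the model is handed back to the engineer.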
Git is used in most programming and project courses at NTNU and other universities. In this project, we will investigate how feedback and assessment can be monitored and automated with the help of generative AI. The goal is a pedagogically sound use of generative AI that is well integrated into the workflow, e.g. through GitHub Actions, and that gives course instructors easy access to data about the projects in dedicated dashboards.
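Such a dashboard could, for instance, aggregate simple repository metrics collected by a workflow job on each push. A minimal sketch (the function name and the use of `git log --pretty=format:%an` output are illustrative assumptions):

```python
from collections import Counter

def commits_per_author(log: str) -> Counter:
    """Count commits per author from `git log --pretty=format:%an` output,
    one author name per line."""
    return Counter(line.strip() for line in log.splitlines() if line.strip())

# Example output of `git log --pretty=format:%an` in a student repository:
sample = "Kari\nOla\nKari\nKari\nOla\n"
print(commits_per_author(sample))  # Counter({'Kari': 3, 'Ola': 2})
```

Metrics like commit distribution per team member are exactly the kind of data an instructor dashboard, or an LLM generating formative feedback, could build on.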
Like the goods they transport, ships will eventually become waste and need to be broken down properly. The process of ship dismantling involves various activities, and one of them is inspecting the ship to be dismantled. Such inspection is required to ensure the area to be cut does not contain materials and gases that are harmful to the workers who will dismantle the ship.
There is an increasing demand for using drones to inspect ships, as drones can reach high structures and enclosed spaces that are difficult to reach for human inspectors. Since ship inspectors usually do not have experience with drone operations, there is a need for graphical user interfaces (GUIs) for remote ship inspection that are user-friendly for people without any experience using drones.
This project will be carried out as part of the SHEREC project, which aims to improve safety in the ship-breaking process through digitalization and deployment of robots. The students will receive support from the partners affiliated with the project.
Like the goods they transport, ships will eventually become waste and need to be broken down properly. The process of ship dismantling involves various activities, and one of them is cutting the ship's hulls. Currently, hulls are cut manually by workers who stand on scaffolding or are lifted by cranes. This practice is unsafe, as workers are exposed to any accidents that may happen in the cutting area.
There is an increasing demand for using magnetic crawler robots for cutting ship hulls, to prevent workers from having to work at heights. Since cutting workers usually have no experience with robotic systems, there is a need for graphical user interfaces (GUIs) for operating the magnetic crawler robot that are user-friendly for people without any experience using robotic systems.
Like the goods they transport, ships will eventually become waste and need to be broken down properly. The process of ship dismantling involves various activities, and one of them is cutting the ship internally. Currently, the internal parts of a ship are cut manually by workers. This practice is unsafe, as workers are exposed to any accidents that may happen in the cutting area.
There is an increasing demand for robotic systems, such as mobile robotic arms, for cutting ships internally so that workers are not exposed to the hazards that may exist in the cutting area. Since cutting workers usually have no experience with robotic systems, there is a need for graphical user interfaces (GUIs) for operating the mobile robotic arm that are user-friendly for people without any experience using robotic systems.
Developing countries have limited resources for healthcare delivery and hence need to make the most of the resources available.
This project/thesis is linked to the Hisp/DHIS2 (www.dhis2.org/) initiative.
Based on open-source software, Hisp aims at increasing the efficiency and quality of health services by enhancing the necessary reporting of health status. Mobile technologies are crucial because, even where roads and electricity are patchy, there are mobile phones.
The approach of Hisp is pragmatic: rather than elaborate, complex 'perfect' solutions, Hisp provides simple and robust ones that have a realistic chance of uptake.
Hisp is implemented in about 80 countries across Africa and Asia, in varying degrees of completion. Measured by the size of the catchment population, it is one of the world's largest systems serving patients in the Global South.
The project/thesis involves empirical fieldwork in Africa or Asia on selected services of the Hisp portfolio. The purpose of the work is to identify requirements and subsequently help implement them as part of the evolving portfolio of Hisp software.
The Hisp project is managed by the University of Oslo. This project/thesis will be carried out in collaboration with the UiO team.
Previous dissertations (master's, PhD) and reports from the UiO archive can be found here: https://www.duo.uio.no/discover (search using "DHIS2" as the keyword).
A digital twin is defined as a virtual representation of a physical asset or process, enabled through data and simulators, for real-time prediction, optimization, monitoring, control, and informed decision-making. This project is a collaboration with Prof. Adil Rasheed from the Department of Engineering Cybernetics. Examples of digital twins relevant to this project are: an autonomous aquarium or greenhouse, an experiment on soil movement, and an experiment on overload prevention in electric cables. The master thesis will focus on the development of a VR environment for visualizing and interacting with the digital twin.
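The predict-and-monitor loop at the heart of a digital twin can be illustrated with a toy example (the class, field names, and first-order model are illustrative only, not part of the project's actual simulators):

```python
class TankTwin:
    """Toy digital twin of an aquarium tank: it mirrors the measured
    temperature and predicts how it drifts toward the ambient temperature."""

    def __init__(self, ambient: float, rate: float = 0.1):
        self.ambient = ambient        # room temperature in degrees C
        self.rate = rate              # relaxation rate per time step
        self.temperature = ambient    # twin state, synced from sensors

    def update(self, measured: float) -> None:
        # Synchronize the twin with the latest sensor reading.
        self.temperature = measured

    def predict(self, steps: int) -> float:
        # First-order model: temperature relaxes toward ambient each step.
        t = self.temperature
        for _ in range(steps):
            t += self.rate * (self.ambient - t)
        return t

twin = TankTwin(ambient=20.0)
twin.update(measured=26.0)             # heater just switched off
print(round(twin.predict(steps=1), 2))  # 25.4
```

A VR front end would render the twin's current and predicted state, while the `update` path is fed from the physical asset's sensors.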
This thesis topic examines how AI systems and broader digital transformation initiatives can be designed, developed, and deployed in ways that prioritize human values and social well-being while ensuring business value. Students can investigate this from various angles (e.g., organizational, technical, or user-focused) and in multiple settings (e.g., healthcare, government, education, or business). Different research methods (e.g., quantitative surveys, qualitative interviews, case studies, or design science) may be employed to explore stakeholder engagement, policy implications, or innovative technical designs.
Multiple students/teams of students can take this topic, depending on their interests and skills.
Send me an email explaining why this is interesting/relevant for you.
Related works:
Pappas, I. O., Mikalef, P., Dwivedi, Y. K., Jaccheri, L., & Krogstie, J. (2023). Responsible digital transformation for a sustainable society. Information Systems Frontiers, 25(3), 945-953.
Schmager, S., Pappas, I. O., & Vassilakopoulou, P. (2025). Understanding Human-Centred AI: a review of its defining elements and a research agenda. Behaviour & Information Technology, 1-40.
Contemporary works on Human-Centered AI (HCAI) focus on creating AI systems that amplify and augment rather than displace human abilities. HCAI seeks to preserve human control in a way that ensures AI meets our needs while also operating transparently, delivering equitable outcomes, and respecting privacy. AI systems function in diverse spaces (e.g., social, work, and classroom) alongside traditional interactions and activities. Therefore, it is expected that humans and AI will complement each other, stand by each other, and engage in a process of co-learning, co-creation, and co-evolution. Such a process is necessary for combining the strengths of humans and AI and reinforcing each other to achieve Hybrid Intelligence (HI). Unlike traditional AI, designed to operate independently in performing tasks that typically require human intelligence, such as perception and learning, HI involves active collaboration between humans and machines. Thus, further work is needed to understand and design appropriate HCAI technology, with a particular focus on how teachers can work together with AI tools to synergistically combine their strengths and reinforce the efficient and ethical use of technology.
In this topic, the augmentation perspective and the concept of HI will be used to guide the work. The candidates will engage with the design (co-design or participatory design) of learning services (e.g., interfaces or other artefacts) to showcase the challenges and opportunities of hybrid human-AI learning technologies. The six levels of automation model will be used to identify the roles of the various AI users (e.g., learners, teachers). The transition of control between teachers or students and the technology needs to be articulated at different levels and related to the augmentation perspective.
For this topic, there is an option to collaborate with an EdTech company called LearnLab (see: https://www.learnlab.net/en/). LearnLab's platform includes innovative web applications like Colab, Storylab, Idealab, Medialab, and Mylab. These tools support everything from interactive teaching and brainstorming to multimodal storytelling and the production of videos, podcasts, and formative assessments. Through Learnlab’s learner-focused AI, both teachers and students receive personalized support and formative feedback, with the goal of enhancing the learning outcomes and saving teachers' time.
The nature of decision-making is changing drastically, both in personal lives and in the business sphere. An increasing number of decisions are now based on insight that is generated through analytics. Despite this, individuals are often faced with cognitive overload, conflicting views, or biases that result in the non-adoption of insight. This project will be done in collaboration with the Big Data Observatory (https://www.observatory.no) and will involve designing a study protocol and collecting and analyzing neurophysiological data (eye-tracking and electroencephalography) from study participants. This will be done with the help of an expert in such tools.
The field of co-design (also known as participatory design) develops methods and tools to facilitate the inclusion of people of diverse ages, backgrounds, and disciplines in the development of IT products such as smartphone apps, games, and service platforms for citizens and businesses. Participation encompasses all the different stages of the design process: from the analysis of requirements to ideation, prototyping, and technology adoption. Co-design activities have traditionally been performed in the context of in-person workshops facilitated by researchers through the use of physical artifacts (e.g. brainstorming cards). Yet, since the COVID-19 pandemic and the start of work-from-home (WFH) policies, we have become used to hybrid modalities of interaction that heavily leverage digital tools (Zoom, Miro, Teams, etc.). As a consequence, co-design workshops have also moved to the digital domain. In this task, we are interested in investigating how to adapt traditional co-design spaces, methods, and toolkits to the hybrid medium and how to rethink interaction among participants. This task will start with a literature review of existing work and the drafting of a simple framework to understand and compare different co-design strategies; it will then continue with developing prototypes of hybrid toolkits. Examples of hybrid toolkits will be provided as case studies.
ICT for Health & Well-being in Built Environments
This project will explore how ICT could contribute to sustainable built environments that support better health and well-being of their occupants. The work will be conducted within the SWELL project: https://www.ntnu.edu/sustainability/swell.
The tasks will include:
- A literature review of how ICT could contribute to health and well-being in sustainable built environments.
- A literature review of relevant interactive and ubiquitous digital technologies.
- Design and prototyping of a solution to engage users of buildings or other physical spaces.
- Evaluation of the prototype.
The aim of the project is to implement and evaluate a global value numbering transformation in the JLM compiler.
Conventional imperative language compilers represent programs internally as static single assignment (SSA) form within a control flow graph (CFG). Although this intermediate representation (IR) is the dominant representation for imperative programs, it bears several drawbacks, such as the SSA maintenance cost, loop (re-)discovery, and the regular loss of important invariants throughout compilation [3]. In contrast, the Regionalized Value State Dependence Graph (RVSDG) is a compiler IR actively developed at NTNU that represents control- and data-flow in one unified representation, avoiding many of the CFG's drawbacks. It is a data-flow-centric IR where nodes represent computations, edges represent computational dependencies, and regions capture the hierarchical structure of programs. It represents programs in demand-dependent form, implicitly supports structured control flow, and models entire programs within a single IR.

Partial redundancy elimination (PRE) is a compiler transformation that determines when subexpressions are redundant on some, but not necessarily all, paths through the program, and eliminates them. It performs a form of common subexpression elimination as well as loop-invariant code motion, and recent formulations based on IRs in SSA form also unify PRE with global value numbering.
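Hash-based value numbering, the core of value-based PRE [5], can be illustrated on straight-line code. The following is a sketch of the idea only, not of JLM's actual implementation:

```python
def value_number(exprs):
    """Assign value numbers to (op, operand, operand) expressions;
    two expressions that must compute the same value share a number.
    Returns the variables whose computation is redundant."""
    numbers = {}   # canonical expression key -> value number
    assigned = {}  # variable -> value number of its definition
    redundant = []
    for var, (op, a, b) in exprs:
        # Replace operands by their value numbers so equivalence is transitive.
        key = (op, assigned.get(a, a), assigned.get(b, b))
        if op in ("+", "*"):  # commutative ops: canonicalize operand order
            key = (op,) + tuple(sorted(key[1:], key=str))
        if key in numbers:
            redundant.append(var)  # this value was already computed
        else:
            numbers[key] = len(numbers)
        assigned[var] = numbers[key]
    return redundant

exprs = [("x", ("+", "a", "b")),
         ("y", ("+", "b", "a")),   # commutes with x: same value number
         ("z", ("-", "a", "b"))]   # genuinely new value
print(value_number(exprs))  # ['y']
```

Because operands are rewritten to value numbers before hashing, chains of equal values are detected transitively, which is what distinguishes value numbering from purely syntactic common subexpression elimination.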
Currently, the RVSDG is implemented in the JLM compiler [1]. The aim of this project is to add a partial redundancy elimination transformation to JLM and evaluate the implementation against the existing common node elimination transformation. As this project uses cutting-edge compiler research tools, a good understanding of compilers and C++ is required. JLM utilizes the LLVM infrastructure, which is commonly used in both commercial and research compilers. This makes the project highly relevant if you are interested in working with compilers in the future.
More specifically, the goal of the project is the following:
[1] JLM: A research compiler based on the RVSDG IR, March 2025. https://github.com/phate/jlm.
[2] Karthik Gargi. A sparse algorithm for predicated global value numbering. In Proceedings of the ACM SIGPLAN 2002 conference on Programming language design and implementation, PLDI ’02, pages 45–56, 2002.
[3] Nico Reissmann, Jan Christian Meyer, Helge Bahmann, and Magnus Själander. RVSDG: An intermediate representation for optimizing compilers. ACM Transactions on Embedded Computing Systems, 19:49:1–49:28, December 2020.
[4] Reshma Roy, Sreekala S, and Vineeth Paleri. Partial Redundancy Elimination in Two Iterative Data Flow Analyses. In 38th European Conference on Object-Oriented Programming (ECOOP 2024), volume 313 of Leibniz International Proceedings in Informatics (LIPIcs), pages 35:1–35:19, 2024. ISSN: 1868-8969.
[5] Thomas VanDrunen and Antony L. Hosking. Value-Based Partial Redundancy Elimination. In Compiler Construction, pages 167–184, 2004.
The aim of the project is to implement and evaluate a scalar evolution analysis in the JLM compiler.
Conventional imperative language compilers represent programs internally as static single assignment (SSA) form within a control flow graph (CFG). Although this intermediate representation (IR) is the dominant representation for imperative programs, it bears several drawbacks, such as the SSA maintenance cost, loop (re-)discovery, and the regular loss of important invariants throughout compilation [6]. In contrast, the Regionalized Value State Dependence Graph (RVSDG) is a compiler IR actively developed at NTNU that represents control- and data-flow in one unified representation, avoiding many of the CFG's drawbacks. It is a data-flow-centric IR where nodes represent computations, edges represent computational dependencies, and regions capture the hierarchical structure of programs. It represents programs in demand-dependent form, implicitly supports structured control flow, and models entire programs within a single IR.
Scalar Evolution is a compiler analysis that looks at the change in the value of scalar variables over iterations of a loop. The analysis provides facts about induction variables that are utilized in other loop transformations and simplifications, such as loop strength reduction or loop invariant code motion, to improve the quality of the loop code based on these facts.
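The facts such an analysis derives are typically represented as chains of recurrences [2, 7]. A minimal sketch of the affine case, where an induction variable is written as {base, +, step} (class and method names are illustrative):

```python
class AddRec:
    """Affine add recurrence {base, +, step}: the value at loop
    iteration i is base + i * step."""

    def __init__(self, base, step):
        self.base, self.step = base, step

    def at(self, i):
        # Closed-form evaluation at iteration i, no loop simulation needed.
        return self.base + i * self.step

    def __add__(self, other):
        # CR algebra: {b1,+,s1} + {b2,+,s2} = {b1+b2, +, s1+s2}
        return AddRec(self.base + other.base, self.step + other.step)

# i = 0, 4, 8, ... (e.g. a byte offset into an int array)
offset = AddRec(base=0, step=4)
print(offset.at(10))  # 40

# The sum of two induction variables is again an add recurrence:
total = offset + AddRec(base=100, step=1)
print(total.at(10))   # 150
```

Closure under arithmetic operations is what makes the representation useful: transformations such as strength reduction can rewrite loop bodies using these closed forms instead of re-deriving values iteration by iteration.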
Currently, the RVSDG is implemented in the JLM compiler [1]. The aim of this project is to add a scalar evolution analysis to JLM and evaluate the implementation utilizing loop simplifications and transformations. As this project uses cutting-edge compiler research tools, a good understanding of compilers and C++ is required. JLM utilizes the LLVM infrastructure, which is commonly used in both commercial and research compilers. This makes the project highly relevant if you are interested in working with compilers in the future. More specifically, the goal of the project is the following:
[2] Olaf Bachmann, Paul S. Wang, and Eugene V. Zima. Chains of recurrences—a method to expedite the evaluation of closed-form functions. In Proceedings of the International Symposium on Symbolic and Algebraic Computation, 1994. [Online]. Available: https://doi.org/10.1145/190347.190423
[3] Johnnie L Birch. Using the chains of recurrences algebra for data dependence testing and induction variable substitution. PhD thesis, Florida State University, 2002.
[4] Robert van Engelen. Symbolic evaluation of chains of recurrences for loop optimization. 2000.
[5] Robert van Engelen. Efficient symbolic analysis for optimizing compilers. In Proceedings of the International Conference on Compiler Construction, 2001.
[6] Nico Reissmann, Jan Christian Meyer, Helge Bahmann, and Magnus Själander. RVSDG: An intermediate representation for optimizing compilers. ACM Transactions on Embedded Computing Systems, 19:49:1–49:28, December 2020. [Online]. Available: https://doi.org/10.1145/3391902
[7] Eugene V. Zima. On computational properties of chains of recurrences. In Proceedings of the 2001 International Symposium on Symbolic and Algebraic Computation, 2001. [Online]. Available: https://doi.org/10.1145/384101.384148
Health literacy—the ability to access, understand, and apply health information—is crucial for making informed health decisions. However, many individuals struggle with low health literacy, leading to poor health outcomes. Traditional health education methods often fail to engage audiences effectively. Gamification, the use of game design elements in non-game contexts, has emerged as a promising strategy to enhance learning and engagement. This research aims to explore how gamification can be effectively integrated into health education to improve health literacy. The study will focus on designing and evaluating a gamified learning system that encourages users to acquire, retain, and apply health-related knowledge.
While conversational AI and even image and video analysis enjoy widespread use, generative 3D is still nascent, and its models and approaches are more experimental. This thesis will investigate different approaches, both in-memory and hosted via APIs, regarding their applicability to support scene composition for a variety of educational scenarios. The aim is to create and evaluate a proof-of-concept combining a Python back-end with a Unity3D front-end app, integrated into the existing source-code projects MirageXR (Unity3D) and lxr (Python, Django) to benefit from existing development.
With this master thesis project, you will:
* Design and develop an architecture for generating 3D models from prompts and managing API communication for submitting jobs and retrieving generated 3D models
* Assess its feasibility and evaluate efficacy with a small-scale user experiment
* Investigate potential educational usage scenarios (e.g. medical simulation, language learning, creative writing)
Outline solution:
* submit prompt to web service to initiate 3D generation
* monitor whether generation process has finished
* download 3D artefact and display in MirageXR
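The middle step, monitoring the remote generation job, amounts to polling with a timeout. A generic sketch (the status strings and the injected `check_status` callback are assumptions; each real API, such as the Blockade Labs one linked below, defines its own job states):

```python
import time

def poll_until_done(check_status, interval=2.0, timeout=60.0, sleep=time.sleep):
    """Poll `check_status()` (returning e.g. 'pending' or 'complete')
    until the generation job finishes or the timeout is exceeded.
    Returns True on completion, False on timeout."""
    waited = 0.0
    while waited < timeout:
        if check_status() == "complete":
            return True
        sleep(interval)
        waited += interval
    return False

# Simulated job that finishes on the third poll (sleep disabled for the demo):
states = iter(["pending", "pending", "complete"])
print(poll_until_done(lambda: next(states), sleep=lambda _: None))  # True
```

Injecting the sleep function keeps the helper testable; in the app, `check_status` would wrap the HTTP call to the generation service, and a successful poll would trigger the artefact download into MirageXR.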
Additional information
The aim is to interface this service with MirageXR (https://github.com/WEKIT-ECS/MIRAGE-XR/), the AR learning experience editor and player, to support the creation of 3D learning content based on user descriptions. This can be used to, e.g., create props and objects required in XR learning activities.
How to:
* Here is an example API: https://api-documentation.blockadelabs.com/api/
* Here is an alternative (using single-shot image input): https://github.com/VAST-AI-Research/TripoSR
* And the Shap-E implementation: https://github.com/openai/shap-e/tree/main?tab=readme-ov-file
* And 4dfy: https://sherwinbahmani.github.io/4dfy/
Context
The students will have access to a very well-equipped IMTEL VR lab (https://www.ntnu.edu/imtel/) containing various modern AR and VR devices and laptops. The target devices for the project are Apple Vision Pro, HoloLens 2, and Oculus Quest 3.
Main contact
For any questions about the task, please contact Mikhail Fominykh: mikhail.fominykh@ntnu.no.
Supervisors
Prof. Dr. Monica Divitini, Professor at the Department of Computer Science, NTNU
Dr. Mikhail Fominykh, Researcher at the Department of Education and Lifelong Learning, NTNU
Prof. Dr. Fridolin Wild, Professor of AR/VR at the Open University, United Kingdom
Emerging technologies such as virtual/augmented/extended reality (VR/AR/XR) and generative AI such as ChatGPT, Midjourney, and Magic3D are already revolutionizing how we live and work. XR has demonstrated significant potential in transforming educational practices by providing learners with realistic and highly engaging learning experiences. Generative AI is a powerful tool that can be used to quickly and efficiently create a wide range of educational content, including human-like text, videos, images, and even 3D models and software code. The goal of this master project is to investigate whether the combination of these technologies can contribute to creating innovative education tools for NTNU teachers and students.
There are two possible research questions in this project:
1. Development and evaluation of virtual classrooms and learning areas, populated by virtual humans/teaching assistants powered by ChatGPT or similar chatbots (in collaboration with NTNU teachers, using existing ChatGPT plugins). These teaching assistants will be able to interact with students 24/7, answering their questions and providing assistance and personalized feedback.
2. Explore how generative AI (Magic3D or similar) can be used to rapidly create 3D educational content for use in such virtual classrooms by teachers/students without prior programming experience. Previous studies among teachers at NTNU and similar international studies identified difficulties with the generation of educational content as one of the major obstacles to wider adoption of XR among educators.
The students will have access to a very well-equipped IMTEL VR lab (https://www.ntnu.edu/imtel/) containing Valve Index, HTC Vive/Vive Pro, Vive Cosmos, 2 Magic Leaps, several HoloLens 1 and 2 headsets, Mixed Reality headsets, Oculus Quest, Oculus Rift, the VR treadmill Virtuix Omni, VR laptops, etc. A significant number of the VR/AR devices are portable and can be used at home should the pandemic situation and campus closures recur.
Supervisors: Monica Divitini, Ekaterina Prasolova-Førland (ekaterip@ntnu.no) & Mikhail Fominykh
Please contact Prof. Prasolova-Førland for more information about the task.
Immersive technologies such as virtual/augmented/extended reality (VR/AR/XR) have demonstrated significant potential in transforming educational practices by providing learners with realistic and highly engaging learning experiences. In most cases, due to budget and practical concerns, educators use relatively inexpensive XR equipment such as the Oculus Quest. While this might be sufficient for many educational situations, it is important to investigate the potential of more advanced equipment that provides advanced spatial computing possibilities, simulates senses other than sight and hearing, and facilitates walking.
The goal of this project is to explore how advanced XR technology, beyond the regular XR equipment, could support learning, especially at university and professional level. The specific topic of the project will be defined in collaboration with the student depending on the choice of equipment. Here are examples of possible projects:
The students will have access to a very well-equipped IMTEL VR lab (https://www.ntnu.edu/imtel/) containing Apple Vision Pro, Valve Index, HTC Vive/Vive Pro, Vive Cosmos, 2 Magic Leaps, several HoloLens 1 and 2 headsets, Mixed Reality headsets, Oculus Quest, Oculus Rift, the VR treadmills Virtuix Omni & Cyberith Virtualizer, a bHaptics suit & gloves, VR laptops, etc. A significant number of the VR/AR devices are portable and can be used at home.
The job market, in Norway and internationally, has changed considerably over the past few years due to the COVID-19 pandemic and emerging AI technologies, raising the need for developing innovative methods for workplace training and career guidance. In this project we will investigate how the use of Virtual Reality technologies and gaming elements can 1) motivate and inform young job seekers on their way to work and 2) contribute to faster skill acquisition for new employees. Through the simulation of a workplace or an industry (e.g. aquaculture or a shipyard), job seekers can immerse themselves in different workplaces and try out typical tasks, for example salmon feeding or welding, in a safe setting, thus mastering the corresponding real-world situation.

The master project will be performed in collaboration with the Erasmus+ VR4VET project (Virtual Reality for Vocational Education and Training, https://vr4vet.eu/), involving several partners in Norway, Germany, and the Netherlands. The project proposes a new approach to vocational training and career guidance, applying VR to allow active and engaging exploration of professions and introductory training, involving job seekers, career counsellors, and industry stakeholders all over Europe. The student(s) will work in close collaboration with NAV, local industries (especially maritime), and our European partners (TU Delft and TH Köln). VR4VET is a continuation of the Virtual Internship project, which has so far resulted in several prototypes for workplace training and job interview training in VR and received international recognition (e.g. Best Demo Award at EuroVR 2018 and Breakthrough Auggie Award finalist) and broad media coverage: https://memu.no/artikler/gir-ungdom-en-virtuell-jobbsmak/, https://www.ntnu.edu/imtel/virtual-internship.
Objective: To design and integrate AI-powered chatbots into simulations to provide real-time scaffolding and analyze their effectiveness in addressing learner challenges and improving outcomes.
Description: This research focuses on embedding AI-driven chatbots into Articulate Storyline 360 simulations to provide real-time scaffolding and support for learners. The study will evaluate the effectiveness of these chatbots in addressing common learner challenges, such as navigating complex tasks or understanding difficult concepts. By analyzing learner interactions and outcomes, the research will offer insights into the potential of conversational AI to personalize and enhance online learning experiences.
This master assignment will apply Large Language Models (LLM) in the analysis and documentation of Systems Engineering tasks for developing Air Traffic Management Systems (ATMS). It is relevant to air traffic control and technology development in multiple countries. The project is in collaboration with Avinor, within the iTEC SkyNex context.
Your benefit – Artificial Intelligence and/or Systems Engineering in a European project
Would you like your master thesis to be relevant to air traffic control and technology development in United Kingdom, Netherlands, Germany, Spain, Canada, Poland, Lithuania and Norway? With this master assignment you will take part in the largest software development project for ATMS in the world. Your contacts network can expand to Europe and Canada, and open career opportunities in Norway, Canada and Europe.
Project background – ATMS and the Systems Engineering Complex – iTECSkyNex.com
When developing an ATMS, Systems Engineering is one of the major efforts and costs. About 80% of the Systems Engineering effort involves analysing and documenting the system, whereas the remaining 20% is the implementation, typically coding the software. While much effort is currently put into investigating and using artificial intelligence for generating code, less attention is paid to the 80% of Systems Engineering spent on analysing and documenting systems.
When analysing and documenting an ATMS, domain knowledge needs to be elicited, analysed, and understood. Many stakeholders are involved in these tasks, and domain knowledge needs to be transformed and understood by domain and subject-matter experts as well as requirements, system, and software engineers in order to develop the system. The tasks of the engineers involve creating specifications, starting from the overall system descriptions, breaking them down into system architecture and design descriptions, and further down into descriptions of the software (also architecture and design). As specifications must be understood by many stakeholders, the use of natural language requirements (NLR) in combination with diagrams at various abstraction levels is common. Decomposing and tracing the information into more detail is a key activity, together with the verification and validation of the information and of the ATMS being developed.
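The decomposition and tracing activity lends itself to tool support, including LLM-assisted checks. A toy consistency check over a requirement hierarchy might look like this (the IDs, field names, and flat-list representation are illustrative assumptions, not Avinor's actual format):

```python
def untraced(requirements):
    """Return IDs of requirements whose declared parent does not exist,
    i.e. broken traces in the decomposition hierarchy."""
    ids = {r["id"] for r in requirements}
    return [r["id"] for r in requirements
            if r["parent"] is not None and r["parent"] not in ids]

reqs = [
    {"id": "SYS-1",  "parent": None},      # system-level requirement
    {"id": "ARCH-1", "parent": "SYS-1"},   # derived architecture requirement
    {"id": "SW-1",   "parent": "ARCH-1"},  # software requirement
    {"id": "SW-2",   "parent": "ARCH-9"},  # broken trace: no ARCH-9 exists
]
print(untraced(reqs))  # ['SW-2']
```

Mechanical checks like this complement an LLM's role: the model can draft or summarize specifications, while deterministic tooling verifies that traceability between abstraction levels remains intact.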
Structure of master assignment
Main research questions:
Methods:
Delivery and expected results:
Specific knowledge/competence/skills for this assignment:
This project will be co-supervised by Leonardo Montecchi and Jingyue Li
As educational institutions adopt various digital learning platforms, seamless interoperability and data integration become essential for enabling effective learning analytics (LA). This thesis explores interoperability frameworks such as the Experience API (xAPI) and Learning Tools Interoperability (LTI) within Norway’s learning ecosystem, explicitly focusing on systems like FS, Canvas, and other data flows managed by Sikt.
The aim is to develop a solution that accesses and aggregates learning data across these systems via available APIs, offering stakeholders meaningful insights through a dashboard or analytics tool.
Thesis Description
The thesis starts with a literature review covering learning analytics, xAPI, LTI standards, and educational data interoperability. Students will then design and implement a prototype system capable of:
The system will serve teachers, students, and administrators, providing a unified view of learning progress, resource usage, and interaction patterns.
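Before aggregation, events from the different systems would typically be normalized into xAPI statements. A minimal sketch using only the core actor/verb/object fields (the course URL and helper name are illustrative; the full vocabulary is defined by the xAPI specification):

```python
import json

def make_statement(actor_email, verb, activity_id, activity_name):
    """Build a minimal xAPI statement with the core actor/verb/object
    structure; real statements often add context, result, and timestamp."""
    return {
        "actor": {"mbox": f"mailto:{actor_email}", "objectType": "Agent"},
        "verb": {"id": f"http://adlnet.gov/expapi/verbs/{verb}",
                 "display": {"en-US": verb}},
        "object": {"id": activity_id,
                   "definition": {"name": {"en-US": activity_name}}},
    }

stmt = make_statement("student@ntnu.no", "completed",
                      "https://example.org/course/tdt4100", "TDT4100")
print(json.dumps(stmt, indent=2))
```

Once events from Canvas, FS, and other sources share this shape, the dashboard can aggregate them uniformly, e.g. counting `completed` statements per activity per learner.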
Candidates should be comfortable with software development and interested in educational technology and data science. Required skills include:
Create an aquarium (freshwater) that can be monitored via a highly usable web app. Allow users to monitor and interact with the aquarium remotely via a series of sensors and actuators.
The project involves a study of relevant existing research and literature, designing, implementing, and evaluating prototypes (IoT + software), and planning and conducting a series of user tests.
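On the software side, one core piece is checking incoming sensor readings against safe ranges and raising alerts in the web app. A minimal sketch (the sensor names and ranges are illustrative only, not aquaculture guidance):

```python
SAFE_RANGES = {                      # illustrative freshwater ranges
    "temperature_c": (22.0, 28.0),
    "ph": (6.5, 7.5),
}

def alerts(readings):
    """Return the sensors whose readings fall outside their safe range."""
    out = []
    for sensor, value in readings.items():
        low, high = SAFE_RANGES[sensor]
        if not (low <= value <= high):
            out.append(sensor)
    return out

print(alerts({"temperature_c": 30.5, "ph": 7.0}))  # ['temperature_c']
```

The same pattern applies to actuators: an out-of-range reading could both notify the user and trigger a corrective action (e.g. switching a heater off), subject to the usability testing the project plans.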
Create a high-tech garden bed that can be monitored via a highly usable web app. Allow users to monitor and interact with the garden bed remotely via a series of sensors and actuators.
Supervisors: Michail Giannakos
Place: LCI Lab: https://lci.idi.ntnu.no/
Suitable for: One or two students
Introduction
Learning analytics (LA) and AI in education (AIED) have been hot topics in educational communities, organizations, and institutions. There are four essential elements involved in all LA and AIED processes: data, analysis, report, and action.
Learning analytics is important because every “trace” within an electronic learning environment may be valuable information that can be tracked, analyzed, and combined with external learner data; every simple or more complex action within such environments provides insights that can guide decision-making by, e.g., students, teachers, and policymakers.
Thesis Description
The increased need to inform decisions and take actions based on data underscores the significance of understanding and adopting LA and AIED in everyday educational practice. To treat educational data in a respectful and protected manner, the policies for LA play a major role and need to be explicitly clarified. This thesis will analyse data associated with the use of LA and AI learning systems in Norway, with the option to also collect primary data (e.g., questionnaires or interviews with students and lecturers). The ultimate goal is to identify which LA and AIED systems are used in Norway, how they are put into practice, and the potential challenges and opportunities with their use.
Requirements
The ideal candidate will have a background and interest in data analysis and research methods; no programming skills are required.
Relevant information
- The candidate can use data that have been collected in the context of the national expert group on learning analytics: https://laringsanalyse.no/
- The candidate can use data from different national organizations such as Sikt and NOKUT.
See the complete topic as PDF: https://drive.google.com/file/d/1wv5l2eok3LLfTuGucurgHEJ7Z9RSztVc/view?usp=sharing
Deployment of new technological infrastructures such as platforms and AI requires new skills to be learned. However, it is not easy for busy practitioners to attend classes as students do. Many people use online resources such as YouTube and social media to keep updated, but this learning is seldom done systematically. We need new pedagogical models to keep updated on the job. We want you to find and design learning models for busy people who need to keep their skills updated all the time.
This is an empirical research project. You will need to create empirical knowledge about new learning models through methods such as co-design and case studies. Your results will include new knowledge, as well as models and design ideas for new learning models and tools.
This task requires a good understanding of, and interest in, empirical qualitative research. The working language for this task is English. The thesis can be written in Norwegian or English, but we recommend English. Please contact Babak before you select this task.
Mobile phones are actively used to access weather and weather-forecast information; for example, the Norwegian Meteorological Institute provides access to its weather services and data through the Yr app. Much less attention has been paid to the presentation of climate information, specifically local climate information. Climate information, however, plays an important role in human decision-making, guiding agricultural and construction activity as well as long-term planning.
The objective of this project is to design such a climate information app for mobile phones.
Deliverables. The app shall overlay static geographical and geomorphological information (from, e.g., OpenStreetMap) with ecological and climate information from high-resolution online datasets, provided by, e.g., the ESA CCI COPERNICUS projects. For ecological information, one may use ESA CCI land cover or the CORINE dataset; for climate information, E-OBS or gridded high-resolution land surface temperature data will fit. The final set of climate and geographical data sources will depend on the design of the app and must be specified in dialogue with the supervisor.
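A core operation when overlaying a gridded climate dataset (E-OBS and similar products are regular lat/lon grids) on a map is sampling the grid at an arbitrary coordinate. A minimal sketch with bilinear interpolation; the tiny temperature grid and its anchor point are made-up illustration data:

```python
def bilinear_sample(grid, lat0, lon0, step, lat, lon):
    """Bilinearly interpolate grid[i][j] (i: latitude index, j: longitude index)
    on a regular grid anchored at (lat0, lon0) with uniform spacing `step`."""
    fi = (lat - lat0) / step
    fj = (lon - lon0) / step
    i, j = int(fi), int(fj)
    di, dj = fi - i, fj - j
    return ((1 - di) * (1 - dj) * grid[i][j] + (1 - di) * dj * grid[i][j + 1]
            + di * (1 - dj) * grid[i + 1][j] + di * dj * grid[i + 1][j + 1])

# 2x2 cell of mean temperatures (deg C) anchored at 63.0N, 10.0E, 0.1 deg spacing
temps = [[4.0, 4.4],
         [3.6, 4.0]]
print(bilinear_sample(temps, 63.0, 10.0, 0.1, 63.05, 10.05))  # cell midpoint -> 4.0
```

In the app, the interpolated value would be rendered as a colored overlay tile on top of the OpenStreetMap base layer.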
This work will be conducted within the International project URSA MAOR: https://www.sintef.no/en/projects/2021/urban-sustainability-in-action-multi-disciplinary-approach-through-jointly-organized-research-schools/
Learning technology comprises software and other technological products that support learning and teaching. Here there is room for self-defined projects, proposed either by students or by students in collaboration with faculty, as well as projects that can be related to the Excited centre for excellent education.
The project aims to study various aspects of learning to program using biometric sensors such as EEG (brain activity), eye tracking (gaze and attention), and GSR (galvanic skin response) sensors. A potential scenario could be, for example, comparing programming tasks with and without AI assistance.
The project involves a study of relevant existing research and literature, as well as planning and conducting a series of user tests. Such tests are expected to generate a wealth of data to be analyzed and interpreted in order to draw out interesting and useful results and conclusions. Depending on the case, it might be necessary for the students to develop data-processing scripts and/or novel visualizations.
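As an example of the kind of data-processing script this may involve, here is a hedged sketch of counting skin-conductance response (SCR) peaks in a GSR trace; the threshold and the toy signal are made up for illustration:

```python
def count_scr_peaks(signal, min_rise=0.05):
    """Count local maxima that rise at least `min_rise` above the preceding trough."""
    peaks, trough = 0, signal[0]
    for prev, cur, nxt in zip(signal, signal[1:], signal[2:]):
        trough = min(trough, prev)           # track the lowest point since the last peak
        if prev < cur > nxt and cur - trough >= min_rise:
            peaks += 1
            trough = cur                     # reset: next peak must rise from here
    return peaks

# Toy GSR trace (microsiemens) with two clear responses
gsr = [0.50, 0.50, 0.62, 0.55, 0.52, 0.51, 0.70, 0.60, 0.58]
print(count_scr_peaks(gsr))  # -> 2
```

A real study would replace this with validated SCR-detection tooling, but the sketch shows the shape of the analysis pipeline.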
Co-supervisor: George Adrian Stoica
Mental health and wellbeing are increasingly recognized as critical factors in student success, particularly in computing education, where high workloads, imposter syndrome, and performance pressure contribute to stress and burnout. This thesis aims to explore the challenges related to mental health in computing education and identify strategies to support student wellbeing.
This is a project in computing education research, involving follow-up research on a teaching design that uses mastery learning in introductory programming, potentially with both quantitative and qualitative methods.
This project is reserved for Joakim Pettersen Vassbakk.
Architectural design of floor plans is a time-consuming and labor-intensive task. Computer-aided architectural design can ease this work, and automatically generated floor plans for office buildings can advance the research field.
The generated floor plans need to be feasible with respect to architectural constraints.
Example previous project:
We have an agreement with Arealize (https://www.arealize.ai/) in Trondheim about a simple collaboration on the project. Arealize has architects who are willing to contribute advice and some testing/interaction with a simulator, if needed. In addition, Arealize has a furnishing database that can be combined with the floor-plan planning, or developed as a separate project on furnishing a floor plan.
Arealize does not want an industry contract. The students are thus free to develop the project in whatever direction they wish, with the possibility of some professional advice from Arealize if needed.
The students are free to choose their own take on the topic, including using a different biologically inspired algorithm.
Note that the project is also linked to the project topic on decision making for multiobjective optimisation, due to overlapping techniques and needs.
Game development is a large, well-known area in traditional web development. However, it remains to be seen how the emerging web3 technology will take it a step further!
In this project, a multiplayer game will be developed using a mix of traditional web and modern web3 technologies! It will be an excellent opportunity for students to learn about new technologies and possibly apply those later in their thesis or in their further careers.
The game to be developed will be a multiplayer Blackjack game, unless the students have a better proposal! Technically, it will have the following components
The game will have the following basic functionalities
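Whichever web/web3 stack is chosen, the rules engine itself is compact. As an illustrative sketch (not tied to any particular framework), Blackjack hand valuation with soft aces:

```python
def hand_value(cards):
    """cards: ranks like ['A', '7', 'K']; returns the best total <= 21 if possible."""
    total, aces = 0, 0
    for c in cards:
        if c == 'A':
            total, aces = total + 11, aces + 1
        elif c in ('K', 'Q', 'J'):
            total += 10
        else:
            total += int(c)
    while total > 21 and aces:   # demote aces from 11 to 1 as needed
        total, aces = total - 10, aces - 1
    return total

print(hand_value(['A', '7']))       # 18 (soft)
print(hand_value(['A', '7', 'K']))  # 18 (ace demoted)
print(hand_value(['K', 'Q', '5']))  # 25: bust
```

In a web3 variant, logic like this could live in a smart contract or be verified off-chain; that design choice is part of the project.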
Research Aspects
There is increasing awareness in society and in the scientific field regarding the downsides of children’s sustained engagement with screen-based systems.
Multisensory capabilities such as recognizing users’ body motions and gestures (e.g., using depth sensors, accelerometers, and gyroscopes), positioning recognition (e.g., using traditional radio-frequency short-to-long distance identification such as NFC or RFID), and speech recognition are now allowing children to interact in a more natural and multimodal manner. Generative AI (GenAI) creates new content and allows us to deliver tailored feedback and recommendations to children (e.g., through conversational interactions). Multisensory GenAI will have to function alongside other activities.
The aim of this project is to develop a paradigm of non-visual mixed reality.
We wish to explore interactivity where users engage with their full senses in the physical and social environment surrounding them, while also interacting with virtual interactive elements overlaid on the physical space that do not rely on visuals. The intended benefits of such interaction are:
Through interaction design experiments conducted in various settings, we shall seek an optimal combination of haptics, audio, and other modalities to augment and better support children. Through elicitation studies, we shall seek to identify common requirements and adaptive solutions that enable natural and intuitive interaction.
As an example of the kind of interactivity we aim for, consider children playing a computer game where virtual game objects are spread in the physical space. These objects are to be experienced through ‘magic-wand’ like handheld devices and wearables that provide haptic, sound, and light feedback in response to movement and physical actions of players. Similar interactions can support learning and communication applications.
The goal of this thesis is to produce knowledge about the state of practice in Norway regarding the development of AI-intensive systems.
The paradigm of research will be empirical software engineering. The student(s) will analyze the material produced by the groups in TDT4290 and study how the different companies relate to AI and what the trends are. The students will run a literature review (in the autumn), interviews with customers and students, report analysis, and analysis of software.
Patón-Romero, J. David, Ricardo Vinuesa, Letizia Jaccheri, and Maria Teresa Baldassarre. "State of gender equality in and by artificial intelligence." IADIS International Journal on Computer Science and Information Systems 17, no. 2 (2022): 31-48.
This thesis is offered in collaboration with MIA Health.
Today, MIA mostly covers activity (in the form of heart rate and PAI) linked to prevention, including the ability to connect to various wearables such as smartwatches and health monitors, along with the frontend/backend and user administration needed to handle this.
Could you see yourself as part of a dedicated team working towards a digital twin that looks after you and helps you through life? Some suggested problem areas to look at:
The suggestions above are exactly that, i.e., suggestions, and competence from most of IDI's study programmes will be of interest in this context. In a concrete project collaboration we always strive for the student to be part of the discussion, and part of the pre-project will consist of jointly defining a master's thesis that the student is truly motivated to work on. If desired, we can also arrange a meeting with MIA before the deadline for project preferences.
Progresso (previously ProTus - https://protus.idi.ntnu.no/, username: testUSN@usn.no, password: test) is an evolving online learning platform designed to provide learners with personalized courses across multiple domains. Over the years, different versions of the system have been developed, gradually expanding its capabilities to include new content, analytics, and personalization features.
The current iteration of Progresso offers courses in Java, integrating interactive third-party materials while tracking learner interactions and providing learning analytics. However, as learning methodologies evolve, there is a need to incorporate Self-Regulated Learning (SRL) principles to improve student engagement and autonomy.
This thesis will focus on the further development of the platform by implementing a Self-Regulated Learning module that supports students in setting learning goals, monitoring progress, and adapting their strategies based on feedback. Additionally, a user study will be conducted to evaluate the effectiveness of this approach in enhancing learning outcomes.
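As a hedged sketch of the data an SRL module needs (the class and method names are assumptions for illustration, not the Progresso codebase), a learner-set goal with progress monitoring and feedback might look like:

```python
from dataclasses import dataclass, field

@dataclass
class LearningGoal:
    description: str
    target_exercises: int
    completed: set = field(default_factory=set)

    def log_exercise(self, exercise_id: str) -> None:
        self.completed.add(exercise_id)

    def progress(self) -> float:
        return min(1.0, len(self.completed) / self.target_exercises)

    def feedback(self) -> str:
        p = self.progress()
        if p >= 1.0:
            return "Goal reached: consider setting a harder goal."
        return f"{p:.0%} done: {self.target_exercises - len(self.completed)} exercise(s) left."

goal = LearningGoal("Finish Java loops module", target_exercises=4)
for ex in ("loops-1", "loops-2", "loops-3"):
    goal.log_exercise(ex)
print(goal.feedback())
```

The user study would then evaluate whether goal setting and feedback of this kind improve engagement and autonomy.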
The project consists of the following phases:
The ideal candidates will have a background in software design, solid programming skills, and an interest in hands-on development and experimentation.
Open data involves the pooling and collecting of data across a community, industry, or group of stakeholders. The motivation is the vision (aspiration, hope, belief...) that making data openly available, and hence accessible to everyone, will boost productivity through enhanced collaboration or create better-functioning markets. Examples include Open Targets in the pharmaceutical industry, the EU's PSD2 directive towards open banking in finance, and the HUNT research database at NTNU.
Visions of the role of open data are widespread, as illustrated by this recent Stortingsmelding: https://www.regjeringen.no/no/dokumenter/meld.-st.-22-20202021/id2841118/?ch=5
The challenge, however, is that the mere availability of open data is not sufficient for its uptake and use towards collaboration. There are social, practical and institutional conditions that need to be in place for visions of open data to materialize.
The student(s) will analyse this for a particular open-data initiative, the Open Subsurface Data Universe (https://osduforum.org/). This is a data platform for sharing, communicating, and doing analytics on data. It originated and has a foothold in the fossil energy sector, but is moving into renewable energy and CCS installations too, as OSDU is a general framework for capturing any physical, geo-located asset (similar to Digital Twins).
The students will do their projects with partner companies. Presently there are two: Equinor and AkerBP.
Work on an interesting project related to orientation sensing (of the device relative to the user) in order to provide accurate audio instructions, for example to blind people.
The project involves a study of relevant existing research and literature, designing, implementing, and evaluating prototypes, and planning and conducting a series of user tests.
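As a toy illustration of the end goal, here is a sketch that turns an orientation estimate into a spoken instruction. The conventions (compass headings in degrees, clockwise positive, a 10-degree "straight ahead" tolerance) are assumptions for illustration:

```python
def turn_instruction(device_heading_deg: float, target_bearing_deg: float) -> str:
    """Relative angle in (-180, 180]; positive means the target is clockwise."""
    diff = (target_bearing_deg - device_heading_deg + 180) % 360 - 180
    if abs(diff) < 10:
        return "straight ahead"
    side = "right" if diff > 0 else "left"
    return f"turn {side} about {abs(round(diff))} degrees"

print(turn_instruction(350.0, 20.0))  # -> "turn right about 30 degrees"
print(turn_instruction(0.0, 355.0))   # -> "straight ahead"
```

Strings like these would be fed to a text-to-speech engine; the hard part of the project is producing a reliable device-relative-to-user heading in the first place.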
Many patients need personalized training videos to perform rehabilitation at home. The current training videos from therapists are standardized and do not fit individual needs.
This project aims to use sensor data and Generative AI technologies to generate personalized training videos based on the exercises the patients practice during the clinical investigation.
The research questions are:
The project will be co-supervised by Prof. Frank Lindseth and Associate Prof. Gabriel Hanssen Kiss.
Several models can be used to find out how users’ social media networks, behaviour, and language relate to their ethical practices and personalities. Such models include Schwartz’s values model and Goldberg’s Big Five model, which defines personality traits such as openness, conscientiousness, extraversion, agreeableness, and neuroticism. The thesis project would investigate applying such models to social media text, and how user personalities are reflected in the social networks that they participate in and develop.
This project is linked to an EU project that deals with climate change and its effect on marine biodiversity. The partner company in this project, Synplan (an Oslo-based start-up, https://www.synplan.ai), will co-supervise this thesis.
In the EU, nearly half of the population lives less than 50 km from the sea. These coastal populations are continuously growing, increasing anthropogenic pressures on marine ecosystems. Predicting spatial and temporal biodiversity dynamics—an essential component of mitigation strategies against human and climate change-related impacts—has become increasingly urgent and vital.
This master’s thesis focuses on AI-based methods (CNNs, Transfer learning, Resnet, VGG...) for curating datasets necessary for such predictions. Specifically, it involves the use of images of marine species collected using both low-cost tools (e.g., Planktoscope, Lamprey-MultiBarcodeTools) and high-end instruments (e.g., FlowCam, Cytosense), as well as satellite data. These images need to be labeled—i.e., the species of phytoplankton present in the images must be identified. While scientists currently label some of the data manually, the process is time-consuming and labor-intensive. To scale up the dataset, there is a clear need for AI/ML-based image recognition methods. The project may also involve integrating data from multiple sources, such as frugal and high-end instruments, to enhance the robustness and applicability of the models.
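One curation step such a pipeline needs, regardless of the classifier, is merging candidate labels for the same image from several sources (manual annotators, model predictions from frugal vs. high-end instruments). A hedged sketch using a weighted vote; the species names, weights, and voting rule are illustrative assumptions:

```python
from collections import Counter

def consolidate(labels):
    """labels: [(species, weight)] -> (winning species, normalized support)."""
    votes = Counter()
    for species, weight in labels:
        votes[species] += weight
    winner, score = votes.most_common(1)[0]
    return winner, score / sum(votes.values())

candidates = [("Chaetoceros", 0.9),   # e.g., CNN prediction confidence
              ("Chaetoceros", 1.0),   # e.g., manual annotator
              ("Skeletonema", 0.4)]   # e.g., low-confidence second model
print(consolidate(candidates))  # Chaetoceros wins with ~83% support
```

Images whose normalized support falls below a threshold could be routed back to the scientists for manual labeling, concentrating human effort where it matters most.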
The project is suitable for one or two students.
Many regular maintenance operations occur over the lifetime of a commercial building. This includes, for example, replacement of the air filters that clean the air supplied into a building. Short maintenance cycles stay on the safe side by replacing filters before any efficiency loss or downtime occurs, but this may lead to time- and material-consuming replacements before they are actually necessary.
In an initial step, promising regular maintenance operations for automated prediction need to be identified and ranked based on their economic impact.
The goal of this thesis is to develop predictive maintenance methods for one or multiple of the identified operations in order to reliably detect the need for replacement or maintenance before a problem occurs.
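As a hedged baseline for what "predictive" can mean here, consider tracking the pressure drop across an air filter with an exponentially weighted moving average and flagging replacement when it drifts past a threshold. The numbers below are illustrative, not Piscada data:

```python
def ewma_alerts(series, alpha=0.3, threshold=150.0):
    """Yield indices where the smoothed pressure drop (Pa) exceeds the threshold."""
    avg = series[0]
    for i, x in enumerate(series):
        avg = alpha * x + (1 - alpha) * avg   # exponential smoothing
        if avg > threshold:
            yield i

pressure_drop = [100, 105, 110, 120, 135, 150, 170, 190]  # Pa, rising as the filter clogs
print(list(ewma_alerts(pressure_drop)))  # -> [7]
```

The thesis would go beyond such fixed thresholds, e.g. learning per-installation baselines or forecasting time-to-threshold, but a simple smoothed-drift detector is a useful point of comparison.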
This project is in collaboration with Piscada, a Trondheim-based technology company that develops an industrial cloud-based software platform for customers in construction and energy (PropTech), Industrial IoT, aquaculture, and general process management. The company was established in 2009 as a spin-off from SINTEF and focuses on innovation and simplification of industrial IT systems, as well as building a bridge between industrial automation and IT. There are today approximately 2,000 installations of Piscada's software and a diverse list of renowned customers. We aim to be a leading industrial service platform with a focus on effective monitoring, new insights and optimization for increased sustainability in selected industries.
Many regular maintenance operations occur over the lifetime of a fish farm. This includes, for example, cleaning of the feeding mechanism or the tubes through which feed is distributed to the fish nets. Short maintenance cycles stay on the safe side by cleaning before any downtime or damage occurs, but this may lead to time-consuming cleaning before it is actually necessary. Many fish-farm operators develop a good intuition for when a cleaning cycle is needed, but this intuition is not easily reproducible or transferable across employees.
In an initial step, promising regular maintenance operations for automated prediction need to be identified and ranked based on their economic impact. The goal of this thesis is to develop predictive maintenance methods for one or multiple of the identified operations in order to reliably detect the need for maintenance before a problem occurs.
This project is in collaboration with Piscada, a Trondheim-based technology company that develops an industrial cloud-based software platform for customers in construction and energy (PropTech), Industrial IoT, aquaculture, and general process management. The company was established in 2009 as a spin-off from SINTEF and focuses on innovation and simplification of industrial IT systems, as well as building a bridge between industrial automation and IT. There are today approximately 2,000 installations of Piscada's software and a diverse list of renowned customers. We aim to be a leading industrial service platform with a focus on effective monitoring, new insights and optimization for increased sustainability in selected industries.
As per www.regjeringen.no, zoning plans specify the use, conservation, and design of specific geographical locations. They consist of detailed land-use plan maps coupled with planning provisions and a plan description. When looking to start a construction process in a given area, reviewing the corresponding zoning plan is essential: this is where one finds information on factors such as where in the area buildings can be placed, as well as certain characteristics (e.g., height, roof style) the buildings must abide by. Accessing and understanding zoning plans, however, can be a complex and time-consuming process for citizens, developers, and even case workers. Citizens and developers therefore often rely on contacting municipal offices directly for explanations and guidance, which can be inefficient and time-consuming for both parties. It is thus in the best interest of the municipalities of Norway that a solution for easy retrieval of information from zoning plans is developed.

One such solution, “Planslurpen,” is part of DiBK’s “Drømmeplan” project, and the end goal is for it to be a national component available to everyone. It uses machine learning methods to retrieve key information from zoning plans and presents it in a manner that allows one to easily find which regulations apply to a chosen area. It is not ready for deployment yet, though; for example, the plan ID and plan description must currently be specified and uploaded manually, which would not be ideal in production. High-quality data flow and output are key factors in determining the success of Planslurpen.

In this project, the students will work closely with the municipalities of Trondheim and Kristiansand, stakeholders such as DiBK and KS, and the developers of Planslurpen. The project has a high degree of freedom, as the students will assess the needs of all involved parties and contribute to the further development of Planslurpen based on their findings.
Potential approaches could include designing a data infrastructure for easy integration of Planslurpen in municipal processes, developing multi-agent AI chatbot functionality, suggesting improvements to the Planslurpen API, or researching methods to improve Planslurpen’s retrieval and presentation of zoning plan details.

Throughout the project period, the students will have access to expert competence in the field of zoning plan case handling from the municipalities of Trondheim and Kristiansand, for informative and testing purposes. They will also be working with DiBK, KS, and the developers of Planslurpen. The students will have access to raw data from the municipal zoning plan registries of the Trondheim and Kristiansand municipalities, which consist of several thousand data points. The data will possibly also include the data used to train Planslurpen, although this is yet to be confirmed; it will likely be confirmed by the end of March.
Project thesis outline and objectives:
- Develop an understanding of the problem space
- Discern the needs of involved parties
- Evaluate the current Planslurpen architecture and data flow
- Explore potential approaches
- Literature review covering state-of-the-art methods

Example objectives for master’s thesis:
- Development and implementation of a multi-agent AI architecture for a zoning plan chatbot
- Proof-of-concept implementation of AI-friendly Planslurpen API optimizations
- Development of a scalable and interoperable architecture for integration into other municipalities
- Evaluation of proposed ideas through continuous dialogue with stakeholders
- Development and implementation of methods to improve Planslurpen
- Increasing user trust in Planslurpen through explainability
Video:
DNV is currently leading a project under the auspices of ESA (European Space Agency) that focuses on the use of satellite data within shipping in the Arctic and Baltic Sea regions. The project aims to identify the needs for various types of satellite data, which services and products currently offer this, the extent to which and the manner in which the satellite data is being used, and similar aspects. The current work on this project is published as reports on https://earsc-portal.eu/display/EO4BAS. The EO4BAS project is part of a larger project within EO data (Earth Observation, i.e., satellite data) financed by ESA and the EC (European Commission). Not only opportunities within the maritime sector are explored, but also within, for example, oil and gas and raw-material extraction.
While many students use generative AI tools such as Microsoft Copilot, we have little knowledge of how these tools are used, for what, and how they affect student learning. In this project, you will study the use of Microsoft Copilot by computing students at NTNU using qualitative research methods to gain a rich understanding of the phenomenon. You will do initial literature studies on the topic and design a case study with data-collection methods such as observations, interviews, and archival data. You will analyze the collected data qualitatively to explain the practices of computing students using Copilot.
The gold standard for computer graphics is, and always has been, the simulation of real light dynamics through ray tracing, but due to its high compute demands it has seen little use in real-time applications. In recent times, ray-traced real-time graphics has become a reality, with the last year even bringing ray-tracing capabilities to mobile devices. Several advances made this possible, including better process nodes for silicon, advances in neural-network-based denoising, novel temporal antialiasing techniques, and improvements in Bounding Volume Hierarchy (BVH) construction.

Ray tracing relies heavily on specialized data structures to make the intersection test between the ray and the scene efficient, using some kind of acceleration structure; the standard approach is the BVH. The BVH is a spatial data structure and lends itself well to GPU warp-based execution, because rays issued from nearby pixels (and scheduled on the same shader core) are likely to traverse the same part of the tree. This allows tree nodes to be reused across multiple threads of execution, saving an order of magnitude in bandwidth.

The difference between a well- and a poorly constructed BVH can account for more than 50% of the ray-traversal performance, making build quality a very sensitive topic. Another issue is that higher-quality build algorithms naturally require more time, to the point where building the BVH takes too long to be feasible in a real-time environment. Due to this tradeoff between traversal performance and build time, the field of acceleration-structure construction is wide open, and there are multiple heuristics that can be applied in an attempt to get ahead in one way or another. The complexity of the problem is further increased by the fact that different hardware accelerators have different performance characteristics, meaning the same algorithm may not be the best everywhere.
This all means that the construction of BVHs is not, in general, well understood, and there is ample room for innovation.
For the fall project we would have you implement a standard Surface Area Heuristic (SAH) or Linear BVH (LBVH) build algorithm and feed the resulting BVH into ARM’s hardware to evaluate performance. The initial implementation should be on the CPU, since it is much easier to program, but depending on how it goes you are encouraged to implement a build algorithm using Vulkan Compute as well. The goal is for you to have a solid base (and test pipeline) ready for your Master’s thesis project, where you will have the opportunity to explore the various ways in which the standard algorithms can be improved upon; the specific direction here is up to the student. The algorithms that are implemented can be evaluated on their build time, their memory footprint, and their effect on the framerate of sample content. Several competing algorithms have implementations available on GitHub and can be used for comparison.
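The SAH mentioned above is compact enough to sketch directly: a split candidate is scored by a traversal cost plus each child's intersection work, weighted by the child's surface area relative to the parent. The cost constants below are illustrative defaults, not values tuned for any particular hardware:

```python
def surface_area(box):
    """Surface area of an axis-aligned bounding box ((x0,y0,z0), (x1,y1,z1))."""
    (x0, y0, z0), (x1, y1, z1) = box
    dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def sah_cost(parent, left, n_left, right, n_right, c_trav=1.0, c_isect=2.0):
    """SAH cost of splitting `parent` into `left` (n_left prims) and `right` (n_right prims)."""
    sa_p = surface_area(parent)
    return (c_trav
            + surface_area(left) / sa_p * n_left * c_isect
            + surface_area(right) / sa_p * n_right * c_isect)

parent = ((0, 0, 0), (4, 1, 1))
left, right = ((0, 0, 0), (2, 1, 1)), ((2, 0, 0), (4, 1, 1))
print(sah_cost(parent, left, 8, right, 8))  # cost of a symmetric mid-split
```

A builder evaluates this cost over candidate split planes and keeps the cheapest; the tradeoff the thesis explores is how exhaustively to search for that minimum within a real-time budget.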
Suitable for: 1 student
Supervisors: Theoharis Theoharis, NTNU; Torbjörn Nilsson, ARM
Requirements: Computer graphics courses (see below), knowledge of C++ and OpenGL, interest in learning Vulkan
Courses: TDT4195 (Visual Computing Fundamentals), TDT4230 (Graphics & Visualization), or equivalent.
Literature:
1. https://jacco.ompf2.com/2022/04/13/how-to-build-a-bvh-part-1-basics/
2. http://www-sop.inria.fr/members/Stefan.Popov/media/KDTConstructionGPU_TR10.pdf
3. https://www.nvidia.in/docs/IO/77714/sbvh.pdf
4. https://meistdan.github.io/publications/ploc/paper.pdf
5. https://devblogs.nvidia.com/wp-content/uploads/2012/11/karras2012hpg_paper.pdf
6. http://gamma.cs.unc.edu/SATO/SATO_files/sato_preprint.pdf
7. https://vulkan-tutorial.com/
8. https://research.nvidia.com/sites/default/files/publications/dnn_denoise_author.pdf
9. http://behindthepixels.io/assets/files/TemporalAA.pdf
A recent addition to the modeling of scenes is based on 3D Gaussian primitives. The associated rendering technique is called Gaussian Splatting:
https://en.wikipedia.org/wiki/Gaussian_splatting
The idea behind this project is to create a 3D Gaussian representation of the inside of Nidaros Cathedral, based on a large set of photographs taken by Torbjørn Hallgren.
Novel views and walkthroughs will then be generated.
Knowledge: Python, C/C++
Prof. Theoharis Theoharis, IDI, NTNU theotheo@ntnu.no
Approximate computing studies how to provide “good enough” results for a certain application. It is used in different contexts, for example on resource-constrained devices or when operating in degraded mode. We want to evaluate the impact of faults on different approximate computing techniques.
Approximate computing [1] is the science that studies how to provide “good enough” results -- according to an application-specific quality metric -- while at the same time improving a certain performance metric such as time-to-solution, energy-to-solution, etc. Many approximate computing techniques exist. In this project, we focus on compiler-level techniques.
Fault injection [2] is a verification technique that deliberately introduces faults into a system, to evaluate their effects. It is often used in the testing of critical systems, to create conditions to test redundancy and recovery mechanisms. Different techniques may be used. In this project we focus on injection at code level and during the compilation process.
While different approximate computing approaches may produce similar results in terms of quality and performance metrics, they may behave very differently in terms of their ability to cope with faults (fault tolerance). The objective of this project is to apply fault injection to compare different approximate computing algorithms. This project will be based on existing approximate computing algorithms and existing benchmarks. The main task is to extend such benchmarks with a fault-tolerance perspective, in which faults are injected into the existing code base.
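A minimal sketch combining the two ideas above: loop perforation (skipping iterations) as a simple approximation, and a single bit flip injected into a floating-point value as the fault model. The data and the chosen bit position are illustrative assumptions:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of the IEEE-754 double representation of x."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    return struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))[0]

data = [float(i) for i in range(100)]
exact = sum(data) / len(data)                  # exact mean: 49.5
perforated = sum(data[::2]) / len(data[::2])   # loop perforation: every 2nd element
faulty = flip_bit(exact, 51)                   # fault injected high in the mantissa

print(f"exact={exact}, perforated={perforated}, faulty={faulty}")
```

The application's quality metric (here, closeness to the exact mean) can then be compared with and without injected faults for each approximation variant, which is exactly the comparison the project would scale up on real benchmarks.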
Producing new furniture is resource-intensive and has a significant environmental impact. By promoting reuse, we can potentially reduce overconsumption and also help lessen the strain on the climate and environment. Therefore, it is crucial to extend the lifespan of furniture as much as possible.
NTNU already offers a reuse service, Gjenbrukstorget, where old furniture can be listed for pickup by staff and students, but there is room for improvement in how reuse is organized. This project aims to explore how a digital solution can enhance accessibility, coordination, and user experience for the Gjenbrukstorget. The goal is to make it easier for more people to participate in reuse, thereby reducing waste and contributing to sustainable solutions at the university.
Relevant research questions include:
To develop a digital solution, this project will use a combination of service design and participatory design. Service design will be employed to map the user journey and identify bottlenecks in the current service. Insights will be gathered through interviews, observations, and workshops with staff, students, maintenance personnel, and administration to understand what works well and what can be improved.

Participatory design will be a key approach to ensure that the solution is developed in close collaboration with actual users. Through co-design workshops and prototype testing, stakeholders will actively contribute to shaping the functionality and user experience, ensuring that the solution meets their needs.
This project aims to contribute to a more sustainable practice at NTNU, where furniture reuse becomes not only easier but also more efficient and accessible to more people. The digital solution will make Gjenbrukstorget a central and user-friendly digital service, helping to reduce waste and create a more sustainable university.
The project will be co-supervised by Birgit Rognebakke Krogstie (IDI, NTNU).
Evaluate different object detection and/or trajectory planning algorithms from a safety perspective. The work involves experimenting with different ways to evaluate object detectors, and possibly defining new benchmarks. It builds on existing research and a previous Master's thesis at IDI.
Autonomous vehicles rely on object detection as a fundamental way of perceiving the environment. Modular pipelines for autonomous vehicles first acquire data from sensors, perform object detection, create a scene representation, and finally perform motion planning.
Autonomous vehicles are safety-critical systems [1], where “safety” is defined as the absence of catastrophic consequences on the users and the environment. However, machine learning components are typically evaluated with traditional metrics based on precision and recall.
Recent works in the literature have proposed metrics and algorithms for object detection that integrate the concept of safety (e.g., [2]). The objective of this project is to evaluate object detectors from the safety perspective. The task involves a literature review on safety metrics for trajectory planning on autonomous vehicles. This work builds on existing research and on a previous Master’s thesis that has focused on the object detection step [3].
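To make the idea of a safety-aware metric concrete, the sketch below weights each ground-truth object by its proximity to the ego vehicle, so that missing a nearby pedestrian costs far more than missing a distant car. This is an illustrative toy metric, not the metric proposed in [2]; the linear weighting, the 50 m horizon, and the function names are assumptions.

```python
def safety_weight(obj_distance_m: float, horizon_m: float = 50.0) -> float:
    """Closer objects matter more for safety; weight decays linearly to 0 at the horizon."""
    return max(0.0, 1.0 - obj_distance_m / horizon_m)

def safety_weighted_recall(ground_truth, detected_ids):
    """ground_truth: list of (obj_id, distance_m); detected_ids: set of detected ids.
    Plain recall counts every miss equally; here each object is weighted by criticality."""
    total = sum(safety_weight(d) for _, d in ground_truth)
    hit = sum(safety_weight(d) for oid, d in ground_truth if oid in detected_ids)
    return hit / total if total > 0 else 1.0

gt = [("car1", 5.0), ("ped1", 10.0), ("car2", 45.0)]
# Missing the distant car2 barely hurts the score; missing the nearby
# pedestrian would hurt a lot, unlike with standard recall.
r = safety_weighted_recall(gt, {"car1", "ped1"})
```

Comparing such a metric against standard precision/recall on the same detections is one way the evaluation task of this project could start.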
The long-term research objective linked to this activity is to define a methodology that can assess the autonomous vehicles from a safety perspective.
Unified patient treatment among healthcare workers in the health sector is undermined by various kinds of boundaries: geographical, institutional, and professional. These boundaries hinder efficient, high-quality patient care. Examples include collaboration between general practitioners and hospitals, or between hospitals and municipal health services, including elderly care.
Despite pressure and initiatives to get healthcare workers to collaborate more closely and interactively, much remains to be done. Information systems in the health sector are "silo"-oriented, i.e., they primarily support local work, not collaboration across the care chain.
Over the years, a number of reforms and measures have been launched (e.g., Samhandlingsreformen (the Coordination Reform), En innbygger en journal (One Citizen, One Record), Helseplattformen) without resolving these challenges.
ICT (digitalization) is identified as a possible solution, given its capacity to support distributed work processes.
The project/thesis will address a selected rollout of a digital service in the health sector. The thesis will involve independent, empirical requirements gathering through observation, interviews, and logging of the use of existing systems. The requirements/needs will then be operationalized as recommended, and possibly prototyped, functionality.
DNV wants to use sensors to automate classification, reduce manual inspections, and ensure continuous monitoring.
One challenge is data sharing and the standardization of different data formats for different uses. Some ships have sensors but do not share their data, while others want standardization but lack the technology. The VISTA Gateway is meant to collect and process sensor data, but it needs to scale and to support more data formats. Maintenance Activity Data (MAD) does not follow ISO standards, which makes standardization demanding. The thesis can investigate how different standards can be combined, and how real-time and batch processing can be optimized.
The thesis can address the potential impact of implementing sensor-based classification: the technical needs for storage and processing, the organizational needs with respect to internal and external conditions, and the sustainability impact the sensors would have.
The project aims to study various aspects of creating a solution that facilitates sharing office/desk use, converting them into “smart” desks or “context-aware” desks.
Software architecture is a critical aspect of designing and developing software systems. Modeling and documenting the software architecture is a fundamental task in software engineering, and established modeling languages (e.g., UML) have been used for this purpose. This project investigates languages and patterns for modeling software architectures that include AI components.
Defining and documenting the software architecture [1] of a system is one of the most important tasks in developing a software system. Different selections and organizations of components and patterns [2] have a large impact on the non-functional attributes of a system, such as reliability, security, and performance. Over time, multiple methods have been defined to guide software architects in the definition of the most appropriate architecture for their system.
Modeling languages, and in particular Architecture Description Languages (ADLs), are an important tool in software architecture specification. The building blocks of an architectural description are: 1) components, 2) connectors, and 3) architectural configurations. An ADL must provide the means for the explicit specification of those aspects [3].
Traditional examples of ADLs include UML, SysML, and AADL [4]. Despite some differences, all of these languages were conceived for traditional software and systems. The objective of this project is to investigate the limitations of these languages when it comes to modeling software architectures that include AI components. For example, concepts like AI models, prompt patterns, AI agents, and fine-tuning are not explicitly captured by traditional ADLs.
The long-term objective of this project is to define modeling languages and patterns for specifying software architectures that include AI components.
We want to start a research centre (Norwegian Centre of Excellence) that investigates the relation between Software Engineering, Artificial Intelligence, and Intersectionality.
The aim of this thesis is to explore the current practices in Norway in this field and to contribute to the theoretical and practical basis of this exciting and relevant area of research.
Software is used by everyone, but software does not, at present, provide equal rights and opportunities for all. Biases, which are prejudices for or against one person or group, influence both software development and the composition of the software engineering workforce. Software programming is increasingly supported by Artificial Intelligence, and there is a risk that existing biases will be perpetuated and reinforced if they are not properly researched and understood.
Examine how companies incorporate Software Engineering, Artificial Intelligence, and Intersectionality into recruitment and software development processes.
Investigate how software development teams are aware of and relate to Artificial Intelligence and Intersectionality.
In Norway we have a well-developed standard, TFM, for naming equipment and components in buildings. Internationally, however, there is no such standard, and many different conventions are created and used.
In this project the student will use our properly labeled (ground truth) time series data to build a model that can classify equipment based on the data it has emitted over the past two weeks or less. The student will have to settle on a suitable way to preprocess the time series and on the type of model to use, and perhaps evaluate whether classes should be merged or split based on clustering. This is challenging because different buildings may have different operating patterns and setpoints.
The student will either get access to our API to fetch data or a hard drive containing the data, along with a list of labeled tags (sensors). We have years of high-frequency data from hundreds of buildings. The data quality in Building Automation Systems is generally very good. The data will be anonymized and will not require any further level of protection.
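A bare-bones version of the classification pipeline might extract summary features from each sensor's time series and assign the nearest class centroid. Everything here is an assumption for illustration: the feature choice, the nearest-centroid model (in practice one would likely use a proper library such as scikit-learn), and the example class names, which are not actual TFM categories.

```python
import math
from collections import defaultdict

def features(series):
    """Summary features of one sensor's recent time series (an illustrative choice)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    return (mean, math.sqrt(var), min(series), max(series))

def train_centroids(labelled):
    """labelled: list of (equipment_class, series). Returns per-class mean feature vector."""
    buckets = defaultdict(list)
    for cls, series in labelled:
        buckets[cls].append(features(series))
    return {cls: tuple(sum(col) / len(col) for col in zip(*feats))
            for cls, feats in buckets.items()}

def classify(series, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    f = features(series)
    return min(centroids, key=lambda cls: math.dist(f, centroids[cls]))

# Hypothetical labelled training data (class names are made up, not TFM codes).
train = [("temperature_sensor", [20.0, 21.0, 20.5, 21.5]),
         ("co2_sensor", [400.0, 650.0, 800.0, 500.0])]
centroids = train_centroids(train)
label = classify([20.2, 21.1, 20.8, 21.0], centroids)
```

The building-to-building variation mentioned above would show up here as per-building shifts in the feature distributions, which is exactly what makes a naive model like this one struggle.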
The TechLARP project aims to close the gender gap in technology studies and STEAM education in order to encourage young girls to pursue STEM and, particularly, computer science studies.
The TechLARP project aims to develop an innovative, arts-based tech education programme that draws on LARP (Live Action Role Playing), creative coding, and wearable technologies to build student teachers' capacity to design and implement effective tech education interventions that increase young girls' engagement, motivation, and confidence in engaging with technology and broader STEM studies.
The goal of this thesis is to develop and validate creative tech inspirational resources, such as videos, story cards, and presentations showcasing possibilities in computer science.
The student(s) will have the possibility to influence the integration of wearable technologies, AI technologies, Virtual Reality, LARP costumes, and community building activities with mentors and role models.
Høiseth, M., & Jaccheri, L. (2011, October). Art and technology for young creators. In International Conference on Entertainment Computing (pp. 210-221). Berlin, Heidelberg: Springer Berlin Heidelberg.
Papavlasopoulou, Sofia, Michail N. Giannakos, and Letizia Jaccheri. "Empirical studies on the Maker Movement, a promising approach to learning: A literature review." Entertainment Computing 18 (2017): 57-78.
https://www.techlarp.eu/about/
Problem description: The European Commission has proposed the AI Act as part of the legal response to the disruptions felt by society from AI technologies. Article 14 of the AI Act describes a need for “Human Oversight” as part of human-in-the-loop control [1] for high-risk AI systems. This requirement underscores a fundamental tension and a critical area for exploration within the broader discourse of Responsible AI. In this project, we set out to study how to effectively embed human agency and control within increasingly autonomous systems to mitigate potential risks and ensure alignment with ethical and societal values.
[1] See for example: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4066781
The National Audit Office of Norway (Riksrevisjonen) is the watchdog of the Norwegian parliament, checking the efficiency and effectiveness of the government. Its work secures transparency of public spending and helps build citizens' trust in government. This is crucial in a world where democratic values are under pressure.
For students interested in the interface between humans and AI-based machines, Riksrevisjonen offers a unique opportunity: access to collect data in government agencies, where the resulting insights can directly influence accountability and improve governance.
For more information see: www.riksrevisjonen.no
While the potential of using Artificial Intelligence in private organizations has long been discussed, more and more public organizations are now implementing AI solutions to support their operations. From fraud detection and chatbots to autonomous vehicles and infrastructure monitoring, AI is gaining ground in public administration. This project will be done in connection with SINTEF Digital and will involve data collection, analysis, and reporting. The aim is to find out what the status of AI adoption is, which uses are potentially interesting, and what value is being realized.
There is considerable confusion about what value AI brings, and for whom. This confusion partly stems from how we define AI, but also from the rapidly changing nature of AI technologies. To create a sustainable process for taking advantage of AI, we need a framework for how we evaluate AI, and for who benefits from it and how.
In this task you will carry out a systematic literature review on the value of AI for society, organizations, and individuals. Based on the results of that study, you will conduct an empirical field study to generate new knowledge about the value of AI. The focus and the field for your study will be decided together with you. This task falls within the discipline of information systems and will rely heavily on literature from information systems, computer-supported cooperative work, and human-computer interaction. You are expected to create new empirical knowledge based on earlier research and your own data generation.
Problem Statement: How can the public sector ensure trust and credibility in AI-generated analyses and decision-making?
Description: The use of artificial intelligence (AI) to analyse and disseminate map data provides new opportunities, but also challenges related to quality assurance and credibility. How can public actors ensure that AI-generated analyses based on spatial data are reliable and verifiable?
Topic: How can explainable AI contribute to trust in automated analytics? A case study on the use of AI for meaningful analysis and visualization of map data. Sociotechnical perspective.
Relevance: The thesis can contribute to the development of guidelines for Explainable AI used in public data services.
Collaboration: https://www.kartverket.no/
The majority of modern applications are written in so-called high-level productivity languages such as Python and JavaScript (e.g., NodeJS). In contrast, computer architecture and hardware research is mostly driven by software written in compiled languages such as C and C++. This mismatch limits our understanding of how these applications are executed on hardware/processors. For example, while code written in C or C++ is handled by the “front-end” structures of a processor, such as the instruction cache and branch predictors, Python and NodeJS application code is handled by the “back-end” structures, such as the data cache. This is because the Python and NodeJS runtimes/interpreters are treated as code at the hardware level, while both the application code and its data are treated as data. Consequently, our understanding of how to build efficient hardware/processors for the bulk of these applications is limited.
To achieve the level of understanding needed, we require better tools to measure the behaviour of such workloads throughout the computing pipeline. This project is concerned with designing and exploring the design space for a tool that can precisely pinpoint which cache lines contain application data and which contain application code. Beyond tracking this information, we must understand the effect of application behaviour on the “front-end” and “back-end” components, which is essential for addressing their inefficiencies. Understanding the execution behaviour of these applications will allow us to propose new methods that ensure their efficient execution, not only from a performance perspective but also with regard to energy efficiency.
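At its core, such a tool classifies memory accesses by the 64-byte cache line they touch and by access kind. The toy sketch below does this for a hand-written trace; the trace, the addresses, and the access kinds are all hypothetical, and a real tool would get them from a simulator or hardware performance monitoring rather than a Python list.

```python
CACHE_LINE = 64  # bytes per cache line (a common, assumed size)

def line_of(addr: int) -> int:
    """Cache line index that a byte address falls into."""
    return addr // CACHE_LINE

def tag_lines(accesses):
    """accesses: list of (addr, kind) with kind in {'ifetch', 'load', 'store'}.
    Returns {line_index: set of kinds}. A line touched both by instruction
    fetches and by loads reveals the interpreter-code-read-as-data pattern."""
    tags = {}
    for addr, kind in accesses:
        tags.setdefault(line_of(addr), set()).add(kind)
    return tags

# Hypothetical trace: interpreter machine code is fetched as instructions,
# while the guest (Python) bytecode on another line is read as data.
trace = [(0x1000, "ifetch"),
         (0x1004, "ifetch"),
         (0x8000, "load"),
         (0x8010, "load"),
         (0x8000, "ifetch")]
tags = tag_lines(trace)
mixed = [ln for ln, kinds in tags.items() if "ifetch" in kinds and "load" in kinds]
```

The interesting output for this project is precisely the `mixed` set: lines the hardware sees as data even though, from the application's perspective, they hold code.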
Sails are increasingly viewed as the most significant enhancement to the current international fleet of ships, offering a promising avenue for sustainable energy with minimal infrastructure requirements. With a rising number of vessels under development featuring sail support, and existing vessels being retrofitted with sails, the interest in harnessing wind energy is evident.
To allow wider adoption of sail ships, there is a need for smart planning software that accounts for optimal routing based on weather forecasting and operational requirements. This project will focus on designing user interfaces for route planning for hybrid sail ships. The proposed user interface should allow users to plan routes and see how ongoing operational requirements and weather conditions may affect the planned route.
Supervisors: Michail Giannakos
Place: LCI Lab (https://lci.idi.ntnu.no/)
Suitable for: One or two students (2 recommended)
Introduction
Science-related subjects are some of the most influential subjects that children learn at an early age, because they teach children to make observations, collect data, and reach conclusions logically. Skills like these are extremely valuable in everyday life. Additionally, research shows that children's interest in science decreases or increases between the ages of 10 and 14, depending on their learning foundation and areas of interest. Some of the most frequent arenas for supporting children's interest in science are science centers. Visits to science centers allow children to learn about different topics and scientific phenomena. Science centers benefit from mobile and interactive technologies, but it is unclear how different elements, such as gamification and AI, can support children's interests. With this in mind, this thesis aims to investigate how an interactive mobile technology (a mobile application) enhanced with LLMs and conversational agents (e.g., open-source examples such as CodeLlama, ParlAI, ChatterBot), as well as multimodal LLMs, can support children's science learning.
Thesis Description
The student(s) will need to review the literature and familiarize themselves with relevant apps and AI technology. The focus is to integrate (M)LLM and conversational-agent mechanics (e.g., allowing children to ask questions during their visit) in an engaging way. Based on best practices from the literature, the candidates will develop the app (e.g., following a co-design or participatory design process) and then carry out a user study, either in school settings or in a science museum, to empirically test the proposed system. Finally, the candidates will analyze the collected data and write up their thesis.
Requirements
The ideal candidate will have a background in user experience, interface design, and the use/integration of (M)LLMs, as well as solid front-end programming skills (JavaScript and CSS) and an interest in hands-on development and experimentation.
Programming skills: MySQL, JavaScript, CSS.
Examples of previous theses:
https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/3159709
https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/3159708
https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/3019915
https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/3019903
Investigate the use of Generative AI, such as Large Language Models (LLMs), to configure static analysis tools (such as SonarQube, PMD, etc.), with particular focus on defining customized rules. These tools are very useful for discovering software faults, but they are difficult to configure and customize. This project aims to understand if and how LLMs can help with this task.
Static code analysis tools [1] (often referred to as “linters”), such as for example SonarQube [2], PMD [3], or SpotBugs [4], are widely used to identify common bugs and mistakes in programming. They are based on identifying coding patterns that are known to introduce faults or vulnerabilities in the code.
While extensive coding rules exist, such as the SEI CERT Coding Standards for Java [5] for security, or MISRA C/C++ [6] for safety, these rules evolve with the discovery of new bugs and vulnerabilities, or with the introduction of new programming language features. Furthermore, developers may want to define customized rules to cover internal patterns or coding standards adopted by their company.
Most of these tools can be customized with new rules, but the process is typically quite cumbersome (e.g., [7], [8]). Generative AI (GenAI) models such as Large Language Models (LLMs) have shown disruptive performance on tasks such as text processing and code generation, and research on the use of GenAI for software engineering tasks is emerging. This project aims to investigate how LLMs can help in configuring static analysis tools.
The idea is to use LLMs to translate rules specified in natural language into a configuration of the static analysis tool. Data will be obtained from the hundreds of rules already implemented in open-source static analysis tools, such as PMD.
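One possible shape of this pipeline: build a prompt from the natural-language rule, ask the model for a rule definition, and validate the result before handing it to the tool. The sketch below is heavily simplified: `call_llm` is a placeholder stub (no real API is called), and the XML format is an illustrative PMD-like structure, not the actual PMD ruleset schema.

```python
import xml.etree.ElementTree as ET

def build_prompt(rule_text: str) -> str:
    """Prompt asking the model to emit a custom rule for a static analysis tool.
    The target here is a simplified, PMD-like XML rule element (illustrative)."""
    return ("Translate the following coding rule into an XML <rule> element "
            "with 'name' and 'message' attributes and an XPath expression "
            "over the Java AST:\n" + rule_text)

def call_llm(prompt: str) -> str:
    """Placeholder standing in for a real LLM API call."""
    return ('<rule name="NoSystemExit" message="Do not call System.exit">'
            '<xpath>//MethodCall[@MethodName="exit"]</xpath></rule>')

def validate_rule_xml(xml_text: str) -> bool:
    """Minimal sanity check before feeding the generated rule to the tool:
    well-formed XML with the attributes and XPath body our simplified format needs."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return False
    return (root.tag == "rule" and "name" in root.attrib
            and root.find("xpath") is not None)

generated = call_llm(build_prompt("Flag any direct call to System.exit."))
ok = validate_rule_xml(generated)
```

The existing open-source rules mentioned above would serve as ground truth: their documentation strings become prompts, and the generated configurations can be compared against the hand-written originals.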
The long-term research objective linked to this activity is to simplify the definition and verification of coding rules, through the use of GenAI.
Virtual devices (camera, microphone) are becoming increasingly interesting for producing visual and audio effects in digital meetings. This project will investigate the creation of a virtual camera and a virtual microphone in Linux, Mac OS and Android.
Following this, some cool applications will be created, such as voice changing and altering facial characteristics.
For inspiration, some well-known packages (that will not be used) include OBS Virtual Camera and VB Cable:
https://obsproject.com/forum/resources/obs-virtualcam.539/
https://vb-audio.com/Cable/
Visual SLAM is a term for a set of methods and algorithms that a) determine the motion of a camera (or a set of cameras) through an environment and b) determine the geometrical shape of that environment. vSLAM often builds on detecting “prominent points” in the images and tracking them through the sequence. If a sufficient number of such points are tracked between two images, the relative pose (translation and rotation) of the camera can be estimated. As any measurement in images is afflicted by errors, both these pose estimates and the estimated 3D positions of the observed image points are uncertain, and the estimation of the complete camera trajectory, as well as of the scene model “stitched together” from many views, becomes input to a huge optimization problem.
In AROS, we have access to both real video footage from underwater missions and a realistic simulation environment that can generate video sequences where the motion and the 3D geometry are precisely known (“ground truth”). The student project is integrated into our design and development process for a vSLAM system that is specifically tuned to cope with the substantial problems of underwater video material: limited visibility due to turbid water, bad illumination that also moves with the robot vehicle, disturbances from plankton, dirt, and small fish, and many more. Which part of the vSLAM development becomes the focus area of the student project is subject to negotiation; the intention is to let the students experiment with novel approaches proposed in the recent literature, some focusing on geometric models and statistical estimation theory, others on machine learning. We are therefore able to adapt the topic largely to the background knowledge the student(s) already have, and to their interest in different relevant research fields, such as state estimation, optimization, object detection and tracking, machine learning, and deep learning.
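The "stitching together" of a trajectory from relative pose estimates can be illustrated in 2D: each estimated relative motion is composed onto the current absolute pose, and any error accumulates as drift. This is a deliberately minimal sketch (planar poses, noise-free inputs); real vSLAM works with 6-DoF poses and resolves accumulated drift via the optimization problem described above.

```python
import math

def compose(pose, rel):
    """Chain a relative pose estimate onto an absolute 2D pose.
    pose = (x, y, theta); rel = (dx, dy, dtheta) in the camera's local frame."""
    x, y, th = pose
    dx, dy, dth = rel
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Four relative motions tracing a 1 m square with 90-degree turns.
rels = [(1.0, 0.0, math.pi / 2)] * 4
pose = (0.0, 0.0, 0.0)
trajectory = [pose]
for rel in rels:
    pose = compose(pose, rel)
    trajectory.append(pose)

# Distance between final pose and start: with noise-free relative poses the
# loop closes; with noisy estimates it does not, which is what pose graph
# optimization is there to correct.
drift = math.hypot(pose[0], pose[1])
```

Injecting small random errors into `rels` and watching `drift` grow is a quick way to see why the loop-closure and optimization topics listed below matter.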
Potential focus topics:
* Robust keypoint tracking in the presence of underwater image degradation
* Dynamic-model-based prediction and correction in keypoint and object tracking in underwater conditions
* Pose graph and state sequence optimization for underwater visual SLAM
* Integration of IMU measurements in underwater visual SLAM
* Machine learning for depth estimation, flow field estimation, and visual clutter detection
Literature:
D. Scaramuzza, F. Fraundorfer: Visual Odometry: Part I - The First 30 Years and Fundamentals. IEEE Robotics and Automation Magazine, 2011.
F. Fraundorfer, D. Scaramuzza: Visual Odometry: Part II - Matching, Robustness, Optimization, and Applications. IEEE Robotics and Automation Magazine, 2012.
Cesar Cadena, Luca Carlone, et al.: Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age. 2016
H. Zhan et al: DF-VO: What Should Be Learnt for Visual Odometry? 2021
The current trend in Visualization centers around APIs that can be accessed within development environments such as Python:
https://www.geeksforgeeks.org/top-python-libraries-for-data-visualization/
The objective of this project will be to compare 5 Python Visualization libraries under a task that will be defined as part of the project.
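A comparison like this needs a small harness that runs the same task against each library and records timings (alongside qualitative criteria such as API ergonomics and output quality, which a harness cannot capture). The sketch below uses a placeholder in place of real library calls, since the five libraries and the task are yet to be chosen; only the timing scaffold is the point.

```python
import time

def render_with_stub_library(points):
    """Placeholder for a library-specific rendering function (e.g., drawing a
    scatter plot of `points`); each real library under comparison gets its own."""
    return sum(x * y for x, y in points)  # stand-in workload

def benchmark(renderers, points, repeats=5):
    """Time each candidate renderer on the same task; return best seconds per call."""
    results = {}
    for name, fn in renderers.items():
        timings = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn(points)
            timings.append(time.perf_counter() - start)
        results[name] = min(timings)  # best-of-n damps scheduler noise
    return results

points = [(float(i), float(i % 7)) for i in range(10_000)]
results = benchmark({"library_a": render_with_stub_library,
                     "library_b": render_with_stub_library}, points)
```

Swapping the stubs for actual plotting calls (and adding memory or file-size measurements) would turn this into the comparison infrastructure for the project.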
Knowledge: Python.
Over the last years, the emergence of key technologies such as big data analytics and artificial intelligence has given rise to a completely new set of skills needed in private and public organizations. With IT playing an increasingly central part in shaping business strategies, it is important that study curricula follow these requirements and produce graduates who fit the needs of organizations. This project will be run in collaboration with the Big Data Observatory (https://www.observatory.no) and will involve collecting data through focus groups and surveys with key representatives. The output will be a detailed look at which skills are necessary and how they can be addressed by educational institutions.