Task: Develop an app for mobile devices that can be mounted in public transportation such as buses, access the camera of the device as well as relatively cheap and accurate positioning equipment with CPos corrections (cm accuracy), and run AI models for assessing and geo-referencing the condition of all road objects visible from the road (one application; other applications could be to create and update HD maps, match real-time images to a reference for back-up localisation, collect data for neural rendering, etc.).
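The core geo-referencing step combines the cm-accurate device position with the detection's pose relative to the camera. A minimal sketch, assuming a flat local metric frame (east/north), a heading measured clockwise from north, and an already-estimated range and bearing to the detected object (the function name and frame convention are illustrative assumptions, not part of the project spec):

```python
import math

def georeference(device_e, device_n, heading_rad, rel_range_m, rel_bearing_rad):
    """Project a detection at (range, bearing) relative to the camera
    into local world coordinates, given the device's cm-accurate
    position from CPos-corrected GNSS.

    heading_rad: vehicle heading, clockwise from north, in radians.
    rel_bearing_rad: bearing of the object relative to the heading.
    """
    theta = heading_rad + rel_bearing_rad  # absolute bearing to object
    obj_e = device_e + rel_range_m * math.sin(theta)
    obj_n = device_n + rel_range_m * math.cos(theta)
    return obj_e, obj_n
```

In practice the range and bearing would come from monocular depth estimation or known camera intrinsics plus object size; this sketch only shows the final projection into the map frame.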
Collaboration: the project will be a collaboration between the national road and mapping authorities, SINTEF, and several counties/municipalities. Huge innovation potential.
The vehicle industry, as well as software and hardware providers, is rapidly developing sensor systems and artificial intelligence (AI) methods for sensing the road environment. Connected and Automated Vehicles (CAVs) are argued to have large potential for improving traffic safety and efficiency. Digital twins make it possible not only to visualize how things work but also to simulate various future scenarios. This is particularly interesting for autonomous vehicles, which can be trained in a simulated environment. Furthermore, changes to an algorithm can be validated in a digital twin before being deployed on the vehicle. Building a digital twin of a Nordic environment enables the development of AI techniques designed for such an environment.
Possible topics:
- NeRF and Gaussian splats: create local environments based on data acquired with an autonomous platform. Dynamic environments that take into account vehicles, pedestrians and cyclists (e.g. MARS, StreetGaussians)
- Underwater NeRFs for representing shipwrecks and other underwater artefacts.
- Digital twins visualization: extend the currently available digital twin of the Gløshaugen area and make it more realistic. The final goal is to import it into Nvidia Omniverse so it can be used to train a network designed for our autonomous vehicle.
- Nvidia CloudXR: visualize a digital twin in VR/AR and simulate various driving conditions
Supervisors: Frank Lindseth, Gabriel Kiss (COMP/IDI)
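The NeRF topic above builds on mapping spatial coordinates to high-frequency features before feeding them to an MLP, which is what lets the network represent fine scene detail. A minimal sketch of NeRF-style positional encoding (the function name and default frequency count are illustrative assumptions):

```python
import math

def positional_encoding(x, num_freqs=4):
    """NeRF-style positional encoding of a scalar coordinate:
    returns [sin(2^k * pi * x), cos(2^k * pi * x)] for k = 0..num_freqs-1,
    giving the downstream MLP access to high-frequency variation."""
    feats = []
    for k in range(num_freqs):
        freq = (2 ** k) * math.pi
        feats.append(math.sin(freq * x))
        feats.append(math.cos(freq * x))
    return feats
```

In a full pipeline this encoding is applied per coordinate (x, y, z and the two view-direction angles) before the MLP that predicts density and colour.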
The emergence of whole slide imaging (WSI) technology allows for digital pathology diagnosis. The applications of digital pathology are expanding, from lesion detection and segmentation to quality assurance and prognostication. The specific application in this project is related to lung cancer staging and is a collaboration with St Olavs Hospital and Levanger Hospital. A relevant topic will be to develop and validate ML techniques for automatic assessment of WSIs from established, well-described cohorts of lung cancer patients.
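Automatic assessment of WSIs typically starts by splitting the gigapixel slide into patches and discarding background, since the full slide is far too large to feed to a network directly. A minimal sketch over an in-memory array (real pipelines would read tiles lazily via a library such as OpenSlide; the function name and the crude brightness threshold are assumptions for illustration):

```python
import numpy as np

def tile_slide(slide, tile=256, stride=256, tissue_thresh=0.05):
    """Split a slide array (H, W, 3, uint8) into fixed-size tiles and
    keep only tiles containing enough non-background pixels - the
    usual first step before feeding patches to an ML model."""
    h, w = slide.shape[:2]
    tiles = []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            patch = slide[y:y + tile, x:x + tile]
            # crude tissue mask: fraction of pixels darker than near-white
            tissue = (patch.mean(axis=-1) < 220).mean()
            if tissue >= tissue_thresh:
                tiles.append(((y, x), patch))
    return tiles
```

The kept patches (with their slide coordinates) can then be labelled from cohort annotations and used to train a patch-level classifier.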
Tasks:
Ultrasound is becoming the imaging modality of choice for cardiac interventions. During cardiac surgery, the locations of instruments as well as anatomic landmarks are crucial information for the surgeons. Today, most of these tools are localized manually or semi-automatically; automating this process would improve accuracy and patient safety.
Possible topics:
- Image denoising via diffusion models
- Generative models or GANs for multimodal image synthesis
- AI for aortic valve detection in mitral trans-esophageal acquisitions of the heart
- 3D segmentation of valves: full 3D segmentation of the mitral valve from echocardiographic data; U-Net or more advanced architectures to be considered
- 3D tracking of structures of interest in echo recordings; visual-transformer-based architectures to be investigated, temporal consistency to be enforced
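For the 3D segmentation topics above, the standard evaluation metric (and, as 1 - Dice, a common training loss for U-Net-style models) is the Dice overlap between predicted and reference masks. A minimal sketch over binary 3D volumes (function name and epsilon value are illustrative assumptions):

```python
import numpy as np

def dice_score(pred, target, eps=1e-6):
    """Dice overlap between two binary 3D masks: 2|A intersect B| / (|A| + |B|).
    eps avoids division by zero when both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A differentiable variant of the same expression, computed on soft network outputs instead of thresholded masks, is what would actually be minimized during training.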
The concept of e-governance proposes the use of ICT for operation and outreach. Extended Reality (XR) technologies have been proposed as a tool to capture citizen perceptions of strategies and policies. The question of how these technologies can best be used for transferring knowledge, communicating, and capturing feedback remains largely unanswered. Subjective constraints such as personal beliefs and experiences further complicate the problem of objectifying citizen output. Hence, the aim of this research is to build tools that can be used for efficient citizen engagement.
- XR application for citizen engagement
- Conceptualization of citizen engagement with XR
Augmented Reality (AR) can provide rich and interactive learning experiences and performance augmentation for remote distributed learners. The use of AR for collaboration and learning has significantly increased during the ongoing pandemic and has been widely adopted by several companies (e.g. Equinor), hospitals treating COVID-19 patients (https://www.businessinsider.com/london-doctors-microsoft-hololens-headsets-covid-19-patients-ppe-2020-5?r=US&IR=T) and educational institutions.
The goal of this master project is to perform research on the design principles and tools for AR-supported collaborative learning while working with several Hololens 2 units (https://www.microsoft.com/en-us/hololens/). Depending on the interests of the student(s), the project will be connected to a company (e.g. Equinor), St. Olavs hospital or a course taught at NTNU. The project is done in collaboration with IMTEL lab (https://www.ntnu.edu/ipl/imtel).
Supervisors: Gabriel Kiss (COMP/IDI), Ekaterina Prasolova-Førland (IMTEL/IPL)
XR in a teaching scenario can provide students with an immersive, interactive experience that cannot be achieved in the real world. As such, it can offer rich and interactive learning experiences for local and distributed learners. XR can therefore be a useful tool for teaching computer graphics and computer science algorithms, and it has also been widely used in the medical field, where expensive, bulky simulators can be replaced by VR/AR tools that provide a realistic experience. Furthermore, combining 3D-printed artifacts or existing anatomic models with virtual elements has proved to be a very effective teaching tool.
Possible topics:
- Virtual university: merging existing tools into a joint app, tools for teaching computer graphics / deep learning concepts in VR
- Teaching echocardiography
- Teaching fetal ultrasound acquisitions
- Tools for understanding the relationship between heart function and cerebral blood flow in neonates (https://cimonmedical.com/neodoppler/)
- Collaborative learning in AR/VR
The aim of this work is to extend a visualization framework using mixed reality, which may improve the way data is presented to an operator during a cardiac ultrasound exam or a bronchoscopy procedure. One of the main challenges for the operator during the procedure is to process data coming from various image sources displayed on several screens scattered around the room and to combine this with a priori knowledge in the form of segmented anatomic models. By gathering information from the important image sources with an existing open-source research platform, presenting the data in an intelligent manner, and visualizing it in the operator's field of view, we hope to improve the operator's ergonomic conditions and increase the success rate of various procedures.
Possible topics:
- Bronchoscopy related
- Echocardiography related