2024 SURE/SROP Research Projects in ECE

Directions: Below are listed the most recent descriptions of 2024 Summer Undergraduate Research in Engineering (SURE) and Summer Research Opportunity Program (SROP) projects available in Electrical and Computer Engineering (ECE). Please consider this list carefully before applying to the SURE or SROP program. You are welcome to contact faculty if you have additional, specific questions regarding these projects. 

*IMPORTANT*: In addition to their online application, SURE applicants for ECE projects must also submit a resume and statement explaining their interest in and qualifications for the project that most interests them, including why they want to work on the project, the relevant skills they bring, and what they expect from their experience. The statement should be no longer than one page (12-point font and 1” margins) and must be uploaded in “other” at the bottom of the online application. Applications without this information may not be considered. Please include your name and UMID on all documents submitted.

SROP applicants for ECE projects should follow the specific directions outlined in the online application.

Applied Electromagnetics & RF Circuits

Project title: Go Small or Go Home: Developing Smaller, Broader Bandwidth Antennas

Faculty mentors:
Anthony Grbic
[email protected]

Steve Young
[email protected]

Course format:
In person

Prerequisites:
Required: EECS 230
Preferred: EECS 330 or EECS 334

The student should have knowledge of time-harmonic electromagnetic fields (plane waves).

Description:
Space-time modulation has attracted strong interest within the fields of radio frequency (RF) circuits, applied electromagnetics, and optics in recent years. Progress in the availability and performance of tunable semiconductor devices, as well as electro-/magneto-optic, phase-change, and 2D materials, has drawn researchers to examine the modulation of electronic circuits and electromagnetic/optical devices in both space and time. Space-time modulation enables filtering (n-path circuits), frequency conversion, parametric effects, and, more recently, non-reciprocity (one-way transmission).

The student will investigate electrically-small (miniature) antennas whose properties vary as a function of time and space. Specifically, the student researcher will explore how modulation in time and space can be leveraged to enhance the bandwidth-efficiency product of small antennas in order to overcome fundamental bounds on standard linear, time-invariant antennas. The unique approach involves modulating the antenna properties, via tunable circuit elements, to couple a radiative antenna mode to non-radiative antenna modes. Out-of-band non-radiative modes are used as low-loss tanks/resonators in parametric gain and bandwidth-broadening processes. The student will simulate, build, test, and characterize these antennas in the laboratory.

Knowledge of electromagnetic fields is required, and experience with microwave instruments (such as signal generators, network analyzers, spectrum analyzers) is an asset. Programming experience using MATLAB or Python is also desirable.

The student will gain marketable skills: a working knowledge of space-time RF circuits, antennas and radiation, microwave measurement techniques, industry-standard RF simulators, and instrument automation.

Computer Vision

Project title: Augmented Reality Rehabilitation Environment

Faculty mentor:
Jiasi Chen
[email protected]

Course format:
Hybrid (combination of online and in person)

Prerequisites:
Required: EECS 280
Preferred: EECS 281

Experience with Unity is desirable but not required.

Description:
Rehabilitation is a promising medical application of augmented reality (AR), with patients benefiting from additional at-home treatment sessions with an AR headset. A caretaker can also wear an AR headset to interact with the patient in the session. Non-AR virtual games only allow users to practice their motor skills in 2D, rather than 3D as provided by AR.

AR-based physiotherapy demands low latency, good spatial alignment consistency, and accurate semantic understanding as users move around highly dynamic environments, all of which are enabled by our current research. For different user activities, the AR application can adapt its resource usage to meet its quality-of-service requirements in constrained home networks. For example, when practicing fine motor skills, accurate semantic understanding of the environment is needed, while when jumping between hoops, VI-SLAM localization accuracy is particularly important, potentially with edge support.

The goal of this summer project is to build a prototype application and demonstrate the feasibility of AR-based rehabilitation in constrained environments. The student participating in this project will gain experience in mobile application development, AR/VR hardware (Microsoft Hololens 2 and/or Meta Quest 3), and working with other students and faculty in a multi-campus research project with partners at Duke and CMU.

Project title: Using 3D Cameras for Passive Impaired, Drunk & Drowsiness Detection (PI-3D) System

Faculty mentor:
Mohammed Islam
[email protected]

Course format:
In person

Prerequisites:
Required: Software programming (Python) to process 3D camera output
Preferred: Image processing, machine learning, optics & photonics

Description:
Three-dimensional (3D) cameras are becoming commodity items, as they are used in smartphones, tablets, and AR/VR/mixed-reality headsets for various applications. Some of the 3D cameras we are using include indirect time-of-flight cameras from Infineon, direct time-of-flight cameras co-registered with infrared cameras from STMicroelectronics, and structured-light cameras from ams-OSRAM. Using these cameras, we examine features on a person’s face, such as facial blood flow, physiological parameters (e.g., heart rate and respiratory rate), and eye motion (e.g., blink rate, percent of eye closure). The goal is to determine the state of the driver in terms of impairment, intoxication, or drowsiness. Ambient light sensitivity is minimized by using active illumination from LEDs or lasers, and motion artifacts are compensated using the depth information from the 3D cameras as well as AI-based face tracking. Machine learning will also be used to establish personalized baselines for an individual; anomalous occurrences will then be detected using algorithms such as anomaly detection, generative adversarial networks, or auto-encoders.

The first task in the project is to improve the software processing for facial blood flow and what is known as remote photoplethysmography (rPPG). The second task will be to perform data fusion by combining facial blood flow data with eye movements and the physiological parameters. Human studies will be conducted in the laboratory at first, and then under different environmental and exercise conditions. Finally, the hardware and software will be integrated into a brass-board system, which will be used for in-vehicle testing to examine the state of the driver.
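
The spectral-peak step at the heart of rPPG can be sketched in a few lines (a toy stand-in for the project's actual pipeline; the function and the synthetic trace below are illustrative assumptions, not project code):

```python
import math

def estimate_heart_rate(signal, fs, lo_bpm=40, hi_bpm=180):
    """Estimate pulse rate (BPM) from a mean face-ROI intensity trace:
    scan candidate frequencies in the physiological band and keep the
    one with the largest DFT magnitude."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]  # remove the DC (ambient) component
    best_bpm, best_power = lo_bpm, -1.0
    for bpm in range(lo_bpm, hi_bpm + 1):
        f = bpm / 60.0  # candidate frequency in Hz
        re = sum(x[k] * math.cos(2 * math.pi * f * k / fs) for k in range(n))
        im = sum(x[k] * math.sin(2 * math.pi * f * k / fs) for k in range(n))
        power = re * re + im * im
        if power > best_power:
            best_bpm, best_power = bpm, power
    return best_bpm

# Synthetic 10 s trace at 30 fps: a 1.2 Hz (72 BPM) pulse plus a slow drift
# standing in for residual motion/illumination artifacts.
fs = 30.0
trace = [0.05 * math.sin(2 * math.pi * 1.2 * k / fs) + 0.002 * k / fs
         for k in range(300)]
print(estimate_heart_rate(trace, fs))
```

A real pipeline would add face tracking, depth-based motion compensation, and proper bandpass filtering before the spectral estimate.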

Control Systems

Project title: Comparison of Optimistic, Opportunistic, and Worst-case approaches for Safety

Necmiye Ozay

Faculty mentor:

Course format:
Hybrid (combination of online and in person)

Prerequisites:
Knowledge of linear algebra and control of linear dynamical systems. Knowledge of linear programming and probability is a plus. Familiarity with coding (MATLAB, Python, or Julia).

Description:
This project will investigate several invariance-based control algorithms for ensuring safety of dynamical systems. While worst-case methods have strong safety guarantees, they can lead to quite conservative behaviors (e.g., if we assume the worst-case for a vehicle, maybe the safest thing is not to operate the vehicle at all).

The question that will be investigated is how to enforce safety when making different assumptions on the unknowns in the system. We aim to develop a new class of safety controllers that can be adjusted to be more optimistic or pessimistic. If time permits, we will also investigate probabilistic notions of safety and algorithms for learning uncertainty models. There is also an opportunity to implement the developed algorithms on small drones in the lab.
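
To make the worst-case flavor concrete, here is a minimal one-step safety filter for a hypothetical scalar system x+ = x + u + w (an illustrative sketch, not one of the project's algorithms):

```python
def worst_case_safety_filter(x, u_nom, w_max=0.1, x_max=1.0, u_max=1.0):
    """Project a nominal input onto the set of inputs that keep
    x+ = x + u + w inside |x| <= x_max for EVERY |w| <= w_max."""
    lo = -x_max + w_max - x   # smallest u keeping x+ >= -x_max under worst w
    hi = x_max - w_max - x    # largest  u keeping x+ <= +x_max under worst w
    u = min(max(u_nom, lo), hi)
    return min(max(u, -u_max), u_max)

# An aggressive nominal controller meets adversarial disturbances:
x = 0.0
for _ in range(50):
    u = worst_case_safety_filter(x, u_nom=1.0)
    w = 0.1 if x >= 0 else -0.1   # worst-case disturbance
    x = x + u + w
    assert abs(x) <= 1.0 + 1e-9   # invariance holds at every step
print(round(x, 3))
```

Note the conservatism the description mentions: even with u_max = 1, this filter never applies more than 0.9, permanently reserving margin for the disturbance; an optimistic or opportunistic variant would try to reclaim some of that margin.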

For some background results from last year’s SURE project, please see: https://ozay-group.github.io/OppSafe/

Project title: Minimal communication for control

Faculty mentor:
Necmiye Ozay
[email protected]

Course format:
In person

Prerequisites:
Knowledge of linear algebra and control of linear dynamical systems. Familiarity with coding (ideally in Python or Julia).

Description:
In a growing number of real-world applications (e.g., wireless sensor networks), controllers are implemented using distributed components (i.e., sensors and actuators) which must coordinate via limited resources. In these cases, sensors and actuators may send messages to one another over a communication network subject to channel bandwidth constraints.

In this research project, we are interested in the problem of controlling a system with a minimum average number of sensor-to-actuator messages. While this problem has recently been solved in the finite-horizon setting (see [1]), we will tackle the infinite-horizon case.

[1]: Antoine Aspeel, Jakob Nylof, Jing Shuang (Lisa) Li, Necmiye Ozay, "A Low Rank Approach to Minimize Sensor-to-Actuator Communication in Finite Horizon Output Feedback." (available at http://arxiv.org/abs/2311.08998)

Embedded Systems

Project title: Augmented Reality Rehabilitation Environment

Faculty mentor:
Jiasi Chen
[email protected]

Course format:
Hybrid (combination of online and in person)

Prerequisites:
Required: EECS 280
Preferred: EECS 281

Experience with Unity is desirable but not required.

Description:
Rehabilitation is a promising medical application of augmented reality (AR), with patients benefiting from additional at-home treatment sessions with an AR headset. A caretaker can also wear an AR headset to interact with the patient in the session. Non-AR virtual games only allow users to practice their motor skills in 2D, rather than 3D as provided by AR.

AR-based physiotherapy demands low latency, good spatial alignment consistency, and accurate semantic understanding as users move around highly dynamic environments, all of which are enabled by our current research. For different user activities, the AR application can adapt its resource usage to meet its quality-of-service requirements in constrained home networks. For example, when practicing fine motor skills, accurate semantic understanding of the environment is needed, while when jumping between hoops, VI-SLAM localization accuracy is particularly important, potentially with edge support.

The goal of this summer project is to build a prototype application and demonstrate the feasibility of AR-based rehabilitation in constrained environments. The student participating in this project will gain experience in mobile application development, AR/VR hardware (Microsoft Hololens 2 and/or Meta Quest 3), and working with other students and faculty in a multi-campus research project with partners at Duke and CMU.

Integrated Circuits and VLSI

Project title: Design and test of circuits for wireless communication and analog computing

Faculty mentor:
Michael Flynn
[email protected]

Course format:
In person

Prerequisites:
Required: EECS 215, MATLAB
Optional (but helpful): EECS 311/312

Description:
Our lab creates new circuits for sensing, wireless communication, and machine learning. The focus is on integrated-circuit analog, RF, and mixed-signal circuits. Key projects include wireless beamforming for 5G and 6G communication and exploring the use of analog circuits for computation. This SURE project will support our group’s research on these topics. The SURE student will work with senior graduate students. Tasks include circuit design, designing and fabricating PCBs, writing support software, programming FPGAs and creating 3D printed parts.

Optics & Photonics

Project title: Automatic laser beam direction stabilization at the ZEUS Laser Facility

Faculty mentor:
Bixue Hou

Contact:
Elizabeth Oxford
[email protected]

Course format:
In person

Prerequisites:
None.

Description:
In a laser system, the beam direction can drift randomly due to vibration of the optical table, air turbulence in the laser enclosure, mechanical imperfections in optics mounts/holders, or thermal effects in the optics or mechanics. This drift can significantly affect experimental results, and correcting it is vitally important for the ZEUS colliding-beam experiments.

In this project, we will develop a computer-controlled optical setup to automatically correct beam drift in the laser system. Two motorized mirror mounts will be installed in the laser beam path. The optical setup also includes a near-field camera and a far-field camera; these camera signals are used to drive the motorized mirror mounts to correct the drift. We will also develop software for the mirror control, written in LabVIEW.
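
The feedback idea can be sketched as a proportional loop (shown here in Python for brevity; the actual control software will be written in LabVIEW, and the gain and units below are illustrative):

```python
def stabilize(drift, gain=0.8):
    """Each iteration: the camera measures the beam-centroid offset, and the
    controller commands the motorized mirror mount to cancel part of it."""
    mirror = 0.0              # accumulated mirror correction (arbitrary units)
    errors = []
    for d in drift:
        centroid = d + mirror      # camera reading: drift plus correction
        mirror -= gain * centroid  # proportional command to the mount
        errors.append(centroid)
    return errors

# A sudden 1.0-unit pointing drift is driven toward zero geometrically.
errors = stabilize([1.0] * 20)
print(abs(errors[-1]) < 1e-2)
```

A real system would run two such loops, one per camera/mirror pair (near field and far field), to control both beam position and angle.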

Project title: Laser Engineering, Modeling and Analysis at ZEUS High Power Laser Facility

Faculty mentors:
Karl Krushelnick
Louise Willingale

Contact:
Elizabeth Oxford
[email protected]

Course format:
In person

Prerequisites:
None

Description:
ZEUS is a 3-petawatt (3 × 10^15 W) high-power laser facility at the University of Michigan, funded by the US National Science Foundation, which will operate as a facility for US researchers in high-field science as well as for the wider international research community. It will be the highest-power laser system in the US by a factor of three and will be among the highest-power lasers worldwide for at least the next decade.

This project will involve assisting U-M graduate students and research scientists at the ZEUS facility in setting up high power laser experiments, making measurements in the target areas, as well as some numerical modeling and data analysis.

Project title: Silicon photonics automation

Di Liang

Faculty mentor:
Di Liang
[email protected]

Course format:
Hybrid (combination of online and in person)

Prerequisites:
No hard requirement, but ideally students will have taken EECS 183, EECS 320, EECS 429, or EECS 434 to gain basic concepts of optical waveguides, resonators, etc. Familiarity with Python or LabVIEW to control hardware is a big plus.

Description:
Silicon photonics is a disruptive technology reshaping the landscape of optical communication. Millions of silicon photonics chips are deployed each year to transmit exponentially growing data inside and outside data centers globally.

This project aims to build a highly efficient automatic silicon photonics characterization platform to measure photonic chips like microelectronic chips. It involves developing algorithms to coordinate testing instruments, optical coupling positioners, and the wafer stage in the lab, and embedding basic data analysis scripts to instantly calculate waveguide loss, resonator quality factor, etc.
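
For instance, one of the embedded analysis scripts might estimate a resonator's loaded quality factor from a transmission scan; the sketch below (with an invented Lorentzian dip at 1550 nm) shows the idea:

```python
def quality_factor(wavelengths_nm, transmission):
    """Q = lam0 / FWHM: locate the dip minimum, then walk outward to the
    half-depth crossings to measure the full width at half maximum."""
    i0 = min(range(len(transmission)), key=lambda i: transmission[i])
    half = (transmission[i0] + max(transmission)) / 2.0
    left = i0
    while left > 0 and transmission[left] < half:
        left -= 1
    right = i0
    while right < len(transmission) - 1 and transmission[right] < half:
        right += 1
    fwhm = wavelengths_nm[right] - wavelengths_nm[left]
    return wavelengths_nm[i0] / fwhm

# Synthetic dip at 1550 nm with a 0.05 nm FWHM, so Q should be near 31000.
lam = [1549.5 + 0.001 * k for k in range(1001)]
trans = [1 - 1 / (1 + ((w - 1550.0) / 0.025) ** 2) for w in lam]
print(round(quality_factor(lam, trans)))
```

On real measurement data, a Lorentzian fit would be more robust than the half-depth walk shown here, but the automation flow (sweep, locate resonance, extract Q) is the same.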

Project title: Using 3D Cameras for Passive Impaired, Drunk & Drowsiness Detection (PI-3D) System

Faculty mentor:
Mohammed Islam
[email protected]

Course format:
In person

Prerequisites:
Required: Software programming (Python) to process 3D camera output
Preferred: Image processing, machine learning, optics & photonics

Description:
Three-dimensional (3D) cameras are becoming commodity items, as they are used in smartphones, tablets, and AR/VR/mixed-reality headsets for various applications. Some of the 3D cameras we are using include indirect time-of-flight cameras from Infineon, direct time-of-flight cameras co-registered with infrared cameras from STMicroelectronics, and structured-light cameras from ams-OSRAM. Using these cameras, we examine features on a person’s face, such as facial blood flow, physiological parameters (e.g., heart rate and respiratory rate), and eye motion (e.g., blink rate, percent of eye closure). The goal is to determine the state of the driver in terms of impairment, intoxication, or drowsiness. Ambient light sensitivity is minimized by using active illumination from LEDs or lasers, and motion artifacts are compensated using the depth information from the 3D cameras as well as AI-based face tracking. Machine learning will also be used to establish personalized baselines for an individual; anomalous occurrences will then be detected using algorithms such as anomaly detection, generative adversarial networks, or auto-encoders.

The first task in the project is to improve the software processing for facial blood flow and what is known as remote photoplethysmography (rPPG). The second task will be to perform data fusion by combining facial blood flow data with eye movements and the physiological parameters. Human studies will be conducted in the laboratory at first, and then under different environmental and exercise conditions. Finally, the hardware and software will be integrated into a brass-board system, which will be used for in-vehicle testing to examine the state of the driver.

Power & Energy

Project title: Equity-Informed Electricity Tariff Design and Electricity Usage Recommendations

Faculty mentor:
Johanna Mathieu
[email protected]

Course format:
In person

Prerequisites:
Basic computer programming (MATLAB and/or Python preferred)

Description:
We will explore possible equity-informed designs for electricity tariffs and develop a suite of appliance usage/replacement recommendations to minimize residential electricity costs. This project may include the following tasks:

  1. Analyze high-frequency residential electricity usage data and characterize commonalities across usage profiles
  2. Explore and define equity specifications for electricity tariff structures
  3. Conduct a comparative analysis of common tariff structures against the equity specifications to inform tariff design
  4. Design an electricity tariff that satisfies or optimizes over the equity specifications
  5. Design a method for validating the equitable electricity tariff and conduct a case study using the high-frequency residential electricity usage data
  6. Analyze high-frequency, home circuit-level electricity usage data to develop baseline appliance usage models
  7. Define methods of soliciting, or inferring from data, household characteristics relevant to usage and billing
  8. Extend these baseline models to be tunable such that the effects of qualitative household characteristics and constraints can be reflected in the output
  9. Consider ways to validate the model’s approximation of the effects incurred by the recommendations
  10. Explore how disaggregation of electricity time-series data can be used to develop a baseline appliance usage model in the absence of circuit-level monitoring
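
As a toy illustration of how tariff structure interacts with equity (tasks 2-4), compare two stylized households with identical total usage but different flexibility; all prices and profiles below are invented for illustration:

```python
def bill(hourly_kwh, tariff):
    """Daily bill: 24 hourly usages (kWh) priced by 24 hourly rates ($/kWh)."""
    return sum(u * p for u, p in zip(hourly_kwh, tariff))

flat = [0.15] * 24                                   # flat-rate tariff
time_of_use = [0.10] * 16 + [0.35] * 5 + [0.10] * 3  # peak priced 4-9 pm

flexible   = [1.0] * 24                              # 24 kWh spread evenly
peak_bound = [0.5] * 16 + [2.9] * 5 + [0.5] * 3      # same 24 kWh, mostly at peak

for home, name in ((flexible, "flexible"), (peak_bound, "peak-bound")):
    print(name, round(bill(home, flat), 2), round(bill(home, time_of_use), 2))
```

Under the flat tariff the two bills are identical; under time-of-use pricing, the household that cannot shift its load off-peak pays substantially more. An equity specification would need to account for exactly this kind of constrained-usage disparity.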

Network, Communication & Information Systems

Project title: Reinforcement learning algorithms for communication networks

Faculty mentor:
Lei Ying
[email protected]

Course format:
In person

Prerequisites:
Familiarity with reinforcement learning algorithms, PyTorch, and communication systems

Description:
The student will work on improving the performance of reinforcement learning algorithms for communication systems and networks.

Quantum Engineering Science & Technology

Project title: Integration of single atoms into nanophotonic devices for quantum information technologies

Faculty mentor:
Alex Burgers
[email protected]

Course format:
In person

Prerequisites:
Familiarity with Python and completion of laser safety training once in the lab

Description:
Quantum information science and engineering is a rapidly expanding field of research. Technologies utilizing quantum properties of light and matter can surpass their classical counterparts. Our lab investigates single atoms coupled to nanophotonic structures for integration with on-chip photonic circuits as an architecture for future quantum technologies.

In this SURE project, the student will work with the PI and a senior graduate student to plan and construct the cold atom apparatus for controlling individual atoms and delivering these atoms to the nanophotonic structures using optical tweezers. The student will gain knowledge of optical systems, experimental control software, nanophotonics, and quantum technologies.

Signal & Image Processing and Machine Learning

Project title: Brain-Inspired Artificial Intelligence

Faculty mentor:
Zhongming Liu
[email protected]

Course format:
In person

Prerequisites:
Machine Learning, Signal Processing, Python, PyTorch

Description:
We draw inspiration from the brain to design models that support human-like behaviors. We train the models to learn representations of images, videos, and text without explicit supervision. We test the models on navigating new environments and performing challenging tasks. We also test the models against human behaviors and brain activity measured with functional magnetic resonance imaging (fMRI).

Project title: Efficient Diffusion Models for Robust Scientific Machine Learning

Faculty mentor:
Qing Qu
[email protected]

Course format:
Hybrid (combination of online and in person)

Prerequisites:
None.

Description:
The project aims to develop efficient methods using diffusion models, requiring minimal data and computational resources, to address scientific machine learning challenges while enabling better control over data generation.

This project will advance the field of diffusion models and aid scientific discovery using machine learning. Specifically, the proposed work has the following objectives:

  1. We will develop computationally and data-efficient diffusion models: efficient latent diffusion models for inverse problems that exploit pixel-space redundancy, and a multi-stage training framework that exploits redundancy in network architectures.
  2. We will explore the diffusion purification approach to improve model robustness against data corruption and distribution shifts when deploying deep learning models.
  3. We will develop a more interpretable and controllable data generation process by better understanding the induced mapping between the noise and image spaces.
  4. We will test and deploy the proposed methods to address the unmet demands of real-world scientific and medical applications.

Project title: Using 3D Cameras for Passive Impaired, Drunk & Drowsiness Detection (PI-3D) System

Faculty mentor:
Mohammed Islam
[email protected]

Course format:
In person

Prerequisites:
Required: Software programming (Python) to process 3D camera output
Preferred: Image processing, machine learning, optics & photonics

Description:
Three-dimensional (3D) cameras are becoming commodity items, as they are used in smartphones, tablets, and AR/VR/mixed-reality headsets for various applications. Some of the 3D cameras we are using include indirect time-of-flight cameras from Infineon, direct time-of-flight cameras co-registered with infrared cameras from STMicroelectronics, and structured-light cameras from ams-OSRAM. Using these cameras, we examine features on a person’s face, such as facial blood flow, physiological parameters (e.g., heart rate and respiratory rate), and eye motion (e.g., blink rate, percent of eye closure). The goal is to determine the state of the driver in terms of impairment, intoxication, or drowsiness. Ambient light sensitivity is minimized by using active illumination from LEDs or lasers, and motion artifacts are compensated using the depth information from the 3D cameras as well as AI-based face tracking. Machine learning will also be used to establish personalized baselines for an individual; anomalous occurrences will then be detected using algorithms such as anomaly detection, generative adversarial networks, or auto-encoders.

The first task in the project is to improve the software processing for facial blood flow and what is known as remote photoplethysmography (rPPG). The second task will be to perform data fusion by combining facial blood flow data with eye movements and the physiological parameters. Human studies will be conducted in the laboratory at first, and then under different environmental and exercise conditions. Finally, the hardware and software will be integrated into a brass-board system, which will be used for in-vehicle testing to examine the state of the driver.

Solid-State Devices and Nanotechnology

Project title: High-performance III-nitride optoelectronics

Faculty mentor:
Yuanpeng We
[email protected]

Course format:
In person

Prerequisites:
EECS 320 Introduction to Semiconductor Devices

Description:
Our lab works on developing high-performance optoelectronic devices based on III-nitride wide-bandgap semiconductors. III-nitride optoelectronic devices, such as LEDs and lasers, have been a critical technology for a broad range of applications, including quantum computing, biosensing, and visible light communication.

In this project, the student will be involved in experiments including material characterizations, numerical simulations, device fabrication and testing. 

Project title: Using 3D Cameras for Passive Impaired, Drunk & Drowsiness Detection (PI-3D) System

Faculty mentor:
Mohammed Islam
[email protected]

Course format:
In person

Prerequisites:
Required: Software programming (Python) to process 3D camera output
Preferred: Image processing, machine learning, optics & photonics

Description:
Three-dimensional (3D) cameras are becoming commodity items, as they are used in smartphones, tablets, and AR/VR/mixed-reality headsets for various applications. Some of the 3D cameras we are using include indirect time-of-flight cameras from Infineon, direct time-of-flight cameras co-registered with infrared cameras from STMicroelectronics, and structured-light cameras from ams-OSRAM. Using these cameras, we examine features on a person’s face, such as facial blood flow, physiological parameters (e.g., heart rate and respiratory rate), and eye motion (e.g., blink rate, percent of eye closure). The goal is to determine the state of the driver in terms of impairment, intoxication, or drowsiness. Ambient light sensitivity is minimized by using active illumination from LEDs or lasers, and motion artifacts are compensated using the depth information from the 3D cameras as well as AI-based face tracking. Machine learning will also be used to establish personalized baselines for an individual; anomalous occurrences will then be detected using algorithms such as anomaly detection, generative adversarial networks, or auto-encoders.

The first task in the project is to improve the software processing for facial blood flow and what is known as remote photoplethysmography (rPPG). The second task will be to perform data fusion by combining facial blood flow data with eye movements and the physiological parameters. Human studies will be conducted in the laboratory at first, and then under different environmental and exercise conditions. Finally, the hardware and software will be integrated into a brass-board system, which will be used for in-vehicle testing to examine the state of the driver.


See past SURE/SROP Projects