Postdoc/PhD

The following Postdoc/Ph.D. positions are available:

Reconfigurable hardware design for 5G applications

The Institute of Computer Engineering, Chair of Processor Design, offers, within a collaborative project on novel design methodologies and heterogeneous manycore architectures for future 5G communication standards and beyond, starting 1 February 2020, a position as

 

Research Associate / PhD Student / Postdoc

(subject to personal qualification, employees are remunerated according to salary group E 13 TV-L)

 

Research area: Reconfigurable hardware design for 5G applications
Terms: The position is limited to 31 Jan 2022 (with the option to be extended).
The period of employment is governed by the Fixed Term Research Contracts Act (Wissenschaftszeitvertragsgesetz – WissZeitVG).

 


The project is a collaboration with National Instruments, the Chair of Compiler Construction headed by Prof. Jeronimo Castrillon and the Vodafone Chair of Mobile Communications headed by Prof. Gerhard Fettweis.


Position and Requirements
At the Chair of Processor Design we have the long-term vision of shaping the way future electronic systems are designed. This project involves realising applications designed within the National Instruments LabVIEW environment on a partially reconfigurable FPGA-based platform. The dynamism of the applications (with regard to the Quality of Service and the number of users at runtime) and the dependencies between the tasks within the applications will be analysed. The obtained information will then be used to dynamically schedule and map the tasks at runtime onto the underlying FPGA-based platform, ensuring that changes made for particular applications at runtime do not interfere with other applications in the system. The whole process is expected to be automated with as little manual work from the designer as possible.


The successful candidate is expected to take a role in:
    • communicating with both the Application and Compiler groups to gain insights into the application behaviors and requirements (in terms of hardware resources, performance, etc.);
    • analysing the FPGA system currently generated by LabVIEW NXG in order to design a compatible partially reconfigurable FPGA-based system that works seamlessly with NI devices;
    • designing a system template that can be easily generated with different parameters (such as the number of processing elements, communication throughput, etc.);
    • working with the Application group to agree on a standard hardware-based implementation of the tasks that is compatible with the partial reconfiguration design flow and with the intended system template;
    • developing design automation tools to automate the implementation process: extracting the necessary modules generated by LabVIEW NXG, packaging them into IPs that can be imported into Vivado for the partial reconfiguration design flow, and generating the full and partial bitstreams;
    • publishing the work at international conferences and/or in journals.


The successful candidate must have:
    • a university degree in computer science or electrical engineering and, if applicable, a PhD;
    • strong FPGA design/architecture background with either Xilinx (preferred) or Intel FPGA;
    • a strong background in an HDL, either Verilog or VHDL;
    • proficiency in C/C++;
    • good knowledge of Computer Architecture and algorithm design;
    • a good publication record (for the Postdoc position) and good communication skills.
The following skills will provide an added advantage:
    • good knowledge of System-on-Chip architecture and design with related concepts such as multi-core, multi-processor, network-on-chip, communication interfaces (AXI, AXI-Stream, etc.), DMAs, etc.;
    • familiarity with LabVIEW NXG and Tcl scripting.

What we offer
You will join a team of enthusiastic researchers who creatively pursue their individual research agendas. Other ongoing projects at the Chair of Processor Design can be found at https://www.cfaed.tu-dresden.de/pd-about. The chair is part of the Cluster of Excellence “Center for Advancing Electronics Dresden”, which offers plenty of resources and structures for career development.
Informal inquiries can be submitted to Prof. Dr. Akash Kumar, Tel +49 (351) 463 39274; Email: akash.kumar@tu-dresden.de
Applications from women are particularly welcome. The same applies to people with disabilities.

Application Procedure
Your application (in English only) should include: motivation letter, CV, copy of degree certificate, transcript of grades (i.e. the official list of coursework including your grades) and proof of English language skills. Complete applications should be submitted preferably via the TU Dresden SecureMail Portal https://securemail.tu-dresden.de by sending it as a single pdf document quoting the reference number PhD19012-PD in the subject header to akash.kumar@tu-dresden.de or by mail to: TU Dresden, Fakultät Informatik, Institut für Technische Informatik, Professur für Prozessorentwurf (Processor Design), Herrn Prof. Akash Kumar, Helmholtzstr. 10, 01069 Dresden, Germany. (Please note: We are currently not able to receive electronically signed and encrypted data). The closing date for applications is 20.12.2019 (stamped arrival date of the university central mail service applies). Please submit copies only, as your application will not be returned to you. Expenses incurred in attending interviews cannot be reimbursed.

Diplomarbeit Positions

The following projects (Diplom-, Master- and Studienarbeit theses) are available:

OpenCV Acceleration for PYNQ-based Ultra96 Platform
An example of a Computer Vision platform suggested by Xilinx (https://github.com/Xilinx/PYNQ-ComputerVision)

OpenCV is one of the most widely used libraries in the computer vision/image processing domain. However, due to resource and power constraints, embedded systems cannot offer processing performance as high as desktop-grade systems. Therefore, in image-processing-focused embedded systems (such as smart security cameras), dedicated image processing chips (besides GPUs) have to be integrated to accelerate the operations while minimizing power consumption. While this approach does offer many advantages (in terms of cost, performance and power), there are some limitations: (1) those chips only support a limited number of functionalities, and (2) if more features are needed, new chips have to be made to replace the old ones. In most (if not all) cases, the system has to be re-designed, as the physical footprint of the new chips might be different.

This project works on accelerating OpenCV-based applications using FPGA. The platform on the FPGA does not target any specific application. It provides a mechanism to seamlessly accelerate standard OpenCV functions by allocating corresponding hardware functions to process the data instead of using the CPU. The accelerator-rich FPGA platform is partially reconfigurable with multiple slots to load the hardware functions at runtime. The accelerators are implemented based on the HLS-compatible OpenCV library provided by Xilinx.
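To make this concrete, the sketch below illustrates the transparent-dispatch idea: a thin wrapper mirrors a standard OpenCV call and only falls back to the CPU when no accelerator slot is free. All hardware-facing names (bitstream loading, streaming the image through a slot) are hypothetical placeholders, not an existing driver API.

```python
import cv2

class FilterAccelPool:
    """Sketch of transparently offloading an OpenCV call to a reconfigurable slot."""
    def __init__(self, load_bitstream=None, run_on_fpga=None, num_slots=2):
        self.load_bitstream = load_bitstream   # hypothetical: loads a partial bitstream into a slot
        self.run_on_fpga = run_on_fpga         # hypothetical: streams the image through the accelerator
        self.free_slots = list(range(num_slots))

    def filter2D(self, img, ddepth, kernel):
        """Same signature as cv2.filter2D; uses a hardware slot when one is available."""
        if self.run_on_fpga and self.free_slots:
            slot = self.free_slots.pop()
            try:
                self.load_bitstream(slot, "filter2d")      # hypothetical partial-bitstream name
                return self.run_on_fpga(slot, img, kernel)
            finally:
                self.free_slots.append(slot)
        return cv2.filter2D(img, ddepth, kernel)           # software fallback on the CPU
```

From the application's point of view, `pool.filter2D(img, -1, kernel)` behaves like `cv2.filter2D(img, -1, kernel)`; whether the work runs on the CPU or in a reconfigurable slot is decided by the resource manager at runtime.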

Goals of this project and potential tasks

The project covers all levels of system design, from hardware design, design automation, device drivers, and resource management with mapping and scheduling optimization, to application analysis, in order to make the proposed framework as transparent to the user as possible. Therefore, multiple phases are needed to successfully deliver the project. For the current Master project/thesis, only the first two phases are presented here. The final goals of the project will be achieved by follow-up projects.

Phase 1: 4 - 6 months

Experiment with a small number of OpenCV functions (provided by Xilinx in the xfOpenCV library) in different categories such as Image Arithmetic, Filters, Geometric Transform, Feature Extraction, etc.

Phase 2: 6 months

Analyze and generalize the hardware interface and communication behavior required by OpenCV functions in different categories, supported by a design automation tool. A preliminary resource management framework to manage the partial bitstreams, the FPGA resources, and the mapping and scheduling of the accelerators is expected in this phase.

Skills acquired in this project

  • Hands-on experience with FPGA development and advanced topics such as Partial Reconfiguration
  • Hands-on experience with designing embedded systems, including hardware/software co-design analysis for performance and energy efficiency
  • Advanced technical report writing

Pre-requisite

  • Digital design with VHDL/Verilog
  • Knowledge of computer architecture
  • C/C++, Python

Helpful Skills

  • Knowledge about image processing algorithms.
  • Knowledge about High-level Synthesis
  • Work independently

Contact Information

 

Any-chip (FPGA) Temperature Sensor using Ring-Oscillator
An example of the thermal hotspot mitigation results in our group

In chip design, thermal behavior is one of the major issues affecting the correctness of operations as well as the reliability and lifetime of the chip. Unfortunately, most FPGAs only have one or two temperature sensors in the middle of the chip. As FPGAs grow bigger and become capable of incorporating many accelerators or soft-core processors, these sensors cannot give much information about which component is the thermal hotspot on the chip. As a result, almost every thermal-aware mapping and scheduling algorithm proposed for such systems to mitigate thermal hotspots is only evaluated in a simulation environment using a combination of power estimation and thermal simulation tools such as HotSpot.
Ring oscillators are a viable solution. As many ring oscillators as needed can be instantiated and placed on the FPGA. With proper design and calibration methodologies, they are very useful in providing a thermal map of the entire chip. In this case, the scheduling and mapping algorithms can be evaluated on the real platform with real inputs from the operation of the chip. There are many research works tackling this issue. Nevertheless, there is no design automation tool that automatically takes the design as input and inserts the ring oscillators at the appropriate locations on the FPGA to accurately capture the thermal map of the chip.
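As an illustration of the driver side of the project, the sketch below reads hypothetical ring-oscillator counter registers from the ARM core of a Zynq-style device and converts the counts into temperatures with a linear calibration. The base address, register layout, and calibration constants are assumptions for illustration only.

```python
import mmap
import struct

RO_BASE = 0x43C00000          # hypothetical AXI base address of the counter block
NUM_ROS = 16                  # number of ring oscillators placed across the fabric
CAL_A, CAL_B = -0.0042, 385.0 # hypothetical linear calibration: T[degC] = A*count + B

def read_thermal_map():
    """Read one counter per ring oscillator and return a list of temperatures."""
    with open("/dev/mem", "rb") as f:   # requires root on a typical embedded Linux
        mem = mmap.mmap(f.fileno(), 4096, mmap.MAP_SHARED, mmap.PROT_READ,
                        offset=RO_BASE)
        counts = struct.unpack(f"<{NUM_ROS}I", mem[:4 * NUM_ROS])
        mem.close()
    # Higher temperature -> slower oscillation -> lower count.
    return [CAL_A * c + CAL_B for c in counts]

if __name__ == "__main__":
    print(read_thermal_map())
```

A production driver would be written in C/C++ as listed in the goals; this Python version only shows the register-readout and calibration flow.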


Goals of this project and Potential tasks

  • Literature survey on existing techniques for implementing ring oscillators on FPGAs
  • Implement ring oscillators on the FPGA
  • Create a C/C++ Linux driver running on the ARM processor inside the FPGA to read out and build the thermal map of the chip
  • Analyze the original input design to automatically insert and place the ring oscillators on the FPGA

Skills acquired in this project

  • Hands-on experience with FPGA development and advanced topics such as Partial Reconfiguration and automatic floorplanning
  • Design analysis with automation tool
  • Advanced technical report writing

Pre-requisite

  • Digital design with VHDL/Verilog
  • Knowledge of computer architecture
  • Knowledge about FPGA architecture
  • C/C++

Helpful Skills

  • Knowledge about TCL script to automate the design steps in Xilinx Vivado
  • Work independently

Contact Information

 

Coffee Machine Usage Automatic Logging Device with Person Detection/Recognition
An example of the coffee machine usage logging with the inefficient paper/pen and the smart device with the LCD for display and Camera + Microphone for Person Detection/Recognition through image or voice

In our lab, coffee machine usage is tracked with a very inefficient paper-and-pen method. When a user takes a coffee from the machine, he/she needs to look for his/her name on the paper sheet and make a tick accordingly. At the end of a quarter, one person in charge must take the sheet and divide the total cost of the coffee beans/milk by the total number of cups taken by everybody. Everybody's contribution is then proportional to the number of cups he/she took.

In this project, we would like a smart device to take care of this tedious task. It can recognize that somebody is taking a coffee by doing person detection. After that, it asks for permission to recognize the person by face using the camera or by voice using the microphone. If the person does not agree, it asks him/her to speak his/her name. If no information associated with that person can be found, the device asks for more information to store in the database. The person in charge of buying the coffee beans and milk needs a separate login portal to log the money spent and to issue the command to calculate the contributions from the users. All of this processing must be done locally on the device. The device communicates with the users through the touch LCD screen. In our labs, there are two coffee machines in two different rooms, and users can use whichever coffee machine they want. Therefore, there will be two such devices, one in each room; one of them will act as a server storing all of the data to synchronize the usage of the two coffee machines. The two boards will communicate over WiFi.
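As a starting point for the detection stage, the sketch below uses OpenCV's bundled Haar cascade for frontal faces as a stand-in for person/face detection; the actual project may instead run machine-learning models on an FPGA or an external AI inference stick, and the file names and camera index here are illustrative.

```python
import cv2

# OpenCV ships this cascade file and exposes its path via cv2.data.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_user(frame):
    """Return (True, face crop) for the largest detected face, else (False, None)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False, None
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])   # nearest person at the machine
    return True, frame[y:y + h, x:x + w]

cap = cv2.VideoCapture(0)        # camera mounted next to the coffee machine
ok, frame = cap.read()
if ok:
    found, face = detect_user(frame)
    if found:
        cv2.imwrite("face_for_recognition.png", face)    # handed to the recognition stage
cap.release()
```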


Goals of this project and Potential tasks

The project covers different levels of system design: the student has to write the user interface to interact with the users, implement a simple database backend framework that both boards can access to store/retrieve the users' data and expenses, and implement person detection, face recognition and voice recognition algorithms using machine learning. These algorithms can be implemented on an FPGA or with an external AI inference device (Intel Movidius stick, Google Coral, etc.), depending on the background of the student.


Phase 1: 3-4 months

Implement the core functions of the system: person detection, face recognition and voice recognition algorithms using machine learning


Phase 2: 2 months

Interface with the peripherals: camera + microphone + LCD touch screen. Implement the backend database that both boards can access. Design the user interface.

Skills acquired in this project

  • Hands-on experience with embedded system development and with interfacing peripherals such as a monitor, camera, microphone and AI inference device.
  • Hands-on experience with designing embedded systems, including hardware/software co-design analysis for performance and energy efficiency (when an external device is needed for AI inference)
  • Advanced technical report writing


Pre-requisite

  • C/C++, Python
  • FPGA development (if you want to work with an FPGA)


Helpful Skills

  • Knowledge about image processing algorithms.
  • Knowledge about machine learning
  • Knowledge about database, web design
  • Knowledge of computer architecture
  • Work independently


Contact Information

Apportion for Life: Operating Environment-aware Partitioning in FPGA-based Systems for Improving Lifetime Reliability

FPGAs (Field Programmable Gate Arrays) are being increasingly used across diverse application areas: healthcare, military, telecom, automobiles, etc. Such diversity results in widely varying operating conditions for FPGA-based embedded systems. Consequently, the rates of different types of physical faults witnessed by the system can differ by large margins. Further, in areas such as space exploration, repair by replacement can be near-impossible. Therefore, the mission life of a system can be one of the major design objectives. In addition, with the predicted growth in the number of IoT devices and the rising usage of FPGA-based edge devices, an increase in a system’s operational life can lead to lower electronic waste and more sustainable computing.

Project Goals:

  • Optimization across multiple system partitioning methods (HW/HW, SW/SW and HW/SW) for designing lifetime-aware FPGA-based systems (a minimal multi-objective sketch follows this list).
  • Model the impact of external and internal physical fault-causing mechanisms on a Dynamic Partially Reconfigurable (DPR)-based system.
  • Simulate/emulate a functional DPR system on FPGAs with relevant fault-injection, fault-diagnosis and repair mechanisms.
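The first goal boils down to multi-objective design space exploration: every partitioning is a design point with a lifetime estimate, a latency and an area cost, and only the Pareto-optimal points are kept. The sketch below shows this dominance filter on hypothetical design points (the names and numbers are illustrative, not project data).

```python
from typing import NamedTuple, List

class Candidate(NamedTuple):
    name: str            # which tasks go to HW slots vs. SW
    mttf_years: float    # higher is better (lifetime reliability)
    latency_ms: float    # lower is better
    area_luts: int       # lower is better

def dominates(a: Candidate, b: Candidate) -> bool:
    """a dominates b if it is no worse in every objective and strictly better in one."""
    no_worse = (a.mttf_years >= b.mttf_years and a.latency_ms <= b.latency_ms
                and a.area_luts <= b.area_luts)
    better = (a.mttf_years > b.mttf_years or a.latency_ms < b.latency_ms
              or a.area_luts < b.area_luts)
    return no_worse and better

def pareto_front(cands: List[Candidate]) -> List[Candidate]:
    return [c for c in cands if not any(dominates(o, c) for o in cands)]

# Hypothetical design points for three partitionings of the same application.
cands = [Candidate("all-SW", 8.0, 42.0, 0),
         Candidate("hot-task-in-HW", 11.5, 18.0, 5400),
         Candidate("all-HW", 11.0, 20.0, 9100)]
print(pareto_front(cands))   # "all-HW" is dominated by "hot-task-in-HW"
```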

 

Skills Acquired:

  • Hands-on development with FPGA-based systems.
  • System-level modelling and design.
  • Multi-objective optimization across different system goals (performance, reliability, power dissipation, etc.)
  • Technical writing for research publications

Pre-requisites:

  • Knowledge of FPGA architecture.
  • Knowledge of real-time scheduling and related concepts
  • Programming skills: C/C++, Python

Additional Optional skills:

  • System-level design with VHDL/Verilog/SystemC
  • Accelerator design with Xilinx SDSoC, Vivado
  • Optimization tools and methods.


Contact Information:

 

Related articles:

  • S. S. Sahoo, T. D. A. Nguyen, B. Veeravalli and A. Kumar, "Lifetime-aware Design Methodology for Dynamic Partially Reconfigurable Systems", in Proceedings of the 23rd Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 1-6, Jan 2018.
  • S. S. Sahoo, T. D. A. Nguyen, B. Veeravalli and A. Kumar, "Multi-objective design space exploration for system partitioning of FPGA-based Dynamic Partially Reconfigurable Systems", in Integration, November 2018.
  • S. S. Sahoo, T. D. A. Nguyen, B. Veeravalli and A. Kumar, "QoS-Aware Cross-Layer Reliability-Integrated FPGA-Based Dynamic Partially Reconfigurable System Partitioning", in 2018 International Conference on Field-Programmable Technology (FPT), Naha, Okinawa, Japan, 2018, pp. 230-233.
FPGA-based Artificial Neural Network Accelerator
CIFAR-10 Dataset for Image Classification

Goals of this project and Potential tasks

  • Literature survey on the existing FPGA-based hardware artificial neural network (ANN) accelerators
  • Implement a small Convolutional Neural Network (CNN) with different design trade-offs on the FPGA
  • Explore the use of different approximate arithmetic units for energy efficiency

Skills acquired in this project

  • Hands-on experience with FPGA-based development
  • Hands-on experience with ANNs
  • Advanced technical report writing

Pre-requisite

  • Digital design with VHDL/Verilog
  • Knowledge about ANNs
  • Knowledge about FPGA architecture
  • C/C++, Python

Helpful Skills

  • Knowledge about TCL script to automate the design steps in Xilinx Vivado
  • Work independently

Contact Information

Approximate Multiplier for FPGAs

Approximate multipliers have recently gained broad attention given the ever-increasing use of error-resilient applications such as machine learning and multimedia, in which multiplication is the key operation of the computational core. Resource metrics such as performance, power and energy dissipation are of greater importance in these applications, as the output can tolerate a relaxed precision. Accelerators, such as FPGAs, are prime candidates for implementing the aforementioned multiplication-intensive programs. However, look-up table (LUT) based multipliers consume more area and have higher latency than their Application Specific Integrated Circuit (ASIC) counterparts. In this project, we aim to implement an area- and delay-efficient LUT-based multiplier for FPGAs. One interesting approach to implementing multiplication is to translate it into addition using approximate algorithms such as Mitchell’s, providing area efficiency. In a further step, we also intend to exploit low-latency adders based on online arithmetic, which significantly decrease delay.
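To illustrate the translate-multiplication-to-addition idea, the sketch below is a behavioral model of Mitchell's logarithmic multiplication for unsigned integers: each operand is split into a leading-one position and a fractional part, the logarithms are added, and an approximate antilogarithm produces the product. This is only a software reference for error evaluation, not the intended hardware design.

```python
def mitchell_mul(a: int, b: int) -> int:
    """Approximate unsigned multiplication via Mitchell's log-based algorithm."""
    if a == 0 or b == 0:
        return 0
    ka, kb = a.bit_length() - 1, b.bit_length() - 1   # leading-one positions (characteristics)
    fa, fb = a - (1 << ka), b - (1 << kb)             # mantissas, scaled by 2^ka and 2^kb
    fsum = (fa << kb) + (fb << ka)                    # (fa/2^ka + fb/2^kb) scaled by 2^(ka+kb)
    if fsum < (1 << (ka + kb)):                       # f1 + f2 < 1
        return (1 << (ka + kb)) + fsum                # 2^(ka+kb) * (1 + f1 + f2)
    return fsum << 1                                  # 2^(ka+kb+1) * (f1 + f2)

# Worst-case relative error of Mitchell's method is about -11%:
print(mitchell_mul(3, 3), 3 * 3)        # 8 vs. 9
print(mitchell_mul(100, 200), 100 * 200)  # 18432 vs. 20000
```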

Goals of this Thesis and Potential Tasks

  • Developing hardware implementations of approximate FPGA-specific adders and multipliers with different bit-widths.
  • Developing functionally equivalent behavioral models using software languages like C++, and testing them in different benchmark applications.
  • Utilizing and assessing the approximate multiplier in real-world applications such as Deep Neural Networks or Digital Signal Processing.

Pre-Requisites and helpful skills

  • FPGA development and programming (Verilog/VHDL, Vivado)
  • Software programming (Java/C++/Python/Matlab)

Contact information for more details

References

  • Saadat, Hassaan et al, "Minimally Biased Multipliers for Approximate Integers and Floating Point Multiplication." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, (2018).
  • Shi, Kan, and George A. Constantinides. "Evaluation of design trade-offs for adders in approximate datapath." HiPEAC Workshop on Approximate Computing, 2015.
  • Shi, Kan. "Design of approximate overclocked datapath" (2015).
Approximate Divider

Approximate dividers have recently gained considerable attention in top conferences for two reasons. First, although divisions are less frequent than multiplications, they are still inevitable operations in ever-increasing machine learning and multimedia applications. Second, their resource consumption and latency are several times those of multipliers, which makes them a bottleneck of the application (in energy and speed). However, our analysis shows that with approximation techniques we can improve these metrics by at least 4x. In this project, we aim to implement an area-, energy-, and delay-efficient divider for FPGA and ASIC platforms. The interesting point of our approximation approach is that it simplifies division to shifts and subtraction. This divider can be used in neural networks and improve resource metrics while having a negligible impact on classification accuracy. The project period is about 3-6 months, and it is based on a recently accepted paper at the ASP-DAC 2020 conference. As it would be an extension of an already published paper, we aim to submit the results to a journal as soon as possible.
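The sketch below shows one way division can be reduced to shifts and subtraction in the logarithmic domain (leading-one detection plus a Mitchell-style antilogarithm). It is a generic behavioral model for illustration, not necessarily the scheme of the referenced ASP-DAC 2020 paper.

```python
def approx_div(a: int, b: int) -> int:
    """Approximate unsigned integer division using only leading-one detection,
    shifts and subtraction (log-domain subtraction of the operands).
    Assumes a >= 0 and b > 0."""
    if a == 0:
        return 0
    ka, kb = a.bit_length() - 1, b.bit_length() - 1   # leading-one positions
    fa, fb = a - (1 << ka), b - (1 << kb)             # mantissas, scaled by 2^ka and 2^kb
    d = (fa << kb) - (fb << ka)                       # (f1 - f2) scaled by 2^(ka+kb)
    if d >= 0:                                        # f1 >= f2
        return ((1 << (ka + kb)) + d) >> (2 * kb)     # 2^(ka-kb) * (1 + f1 - f2)
    return ((2 << (ka + kb)) + d) >> (2 * kb + 1)     # 2^(ka-kb-1) * (2 + f1 - f2)

print(approx_div(100, 7), 100 // 7)   # 14 vs. 14
print(approx_div(7, 2), 7 // 2)       # 3 vs. 3
```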

Goals of this Thesis and Potential Tasks

  • Developing hardware implementations of approximate dividers with different bit-widths.
  • Developing functionally equivalent behavioral models using software languages like C++, and testing them in different benchmark applications.
  • Utilizing and assessing the approximate divider in real-world applications such as Deep Neural Networks or Digital Signal Processing.

Pre-Requisites and helpful skills

  • FPGA development and programming (Verilog/VHDL, Vivado)
  • Software programming (Java/C++/Python/Matlab)

  
Contact information for more details

References

  • Ebrahimi, Zahra et al, "LeAp: Leading-one Detection-based Softcore Approximate Multipliers with Tunable Accuracy.", IEEE/ACM Asia and South Pacific Design Automation Conference (ASP-DAC), 2020.
  • Saadat, Hassaan et al, "Approximate Integer and Floating-Point Dividers with Near-Zero Error Bias." IEEE/ACM Design Automation Conference (DAC), 2019.
  • Saadat, Hassaan et al, "Minimally Biased Multipliers for Approximate Integers and Floating Point Multiplication." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2018.
Sacrificing flexibility in FPGAs: FPGA Architecture based on AND-Inverter Cones
A logic element based on And-Inverter Cones

LUT-based FPGAs suffer when it comes to scaling, as their complexity increases exponentially with the number of inputs. For this reason, LUTs with more than 6 inputs have rarely been used. In an attempt to handle more inputs and increase the logic density of logic cells, And-Inverter Cones (AICs), shown in the figure below, were proposed. They are an alternative to LUTs with a better compromise between hardware complexity, flexibility, delay, and input and output counts. They are inspired by modern logic synthesis approaches that employ and-inverter graphs (AIGs) for representing logic networks. An AIC is a binary tree consisting of AND nodes with programmable conditional inversion, and it offers tapping of intermediate results. AICs have a lot to offer compared to traditional LUT-based FPGAs. The following points summarize the major benefits of using AICs over LUTs:

  1. For a given complexity, AICs can implement a function with more inputs than an LUT.
  2. Since they are inspired by AIGs, area and delay increase linearly and logarithmically, respectively, with the number of inputs, in contrast to the exponential and linear increases in the case of LUTs.
  3. Intermediate results can be tapped out of AICs, thereby reducing logic duplication.

While on the one hand we are sacrificing some of the flexibility offered by FPGAs, there are new nanotechnologies based on materials like germanium and silicon which offer runtime reconfigurability and functional symmetry between p- and n-type behavior. The project aims to explore an FPGA architecture using these reconfigurable nanotechnologies in the context of AICs.
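To make the structure concrete, the sketch below is a tiny behavioral model of a 4-input, depth-2 AIC: each node ANDs its two children and can conditionally invert its result, and the intermediate nodes can be tapped as additional outputs. The node naming and configuration format are illustrative only.

```python
def aic4(inputs, invert):
    """Evaluate a 4-input And-Inverter Cone.

    inputs : four boolean primary inputs
    invert : one conditional-inversion bit per AND node
             (n0 = i0 & i1, n1 = i2 & i3, n2 = n0 & n1)
    Returns the root output and the tapped intermediate outputs.
    """
    n0 = (inputs[0] and inputs[1]) ^ invert[0]
    n1 = (inputs[2] and inputs[3]) ^ invert[1]
    n2 = (n0 and n1) ^ invert[2]
    return n2, (n0, n1)

# Configure the cone so that the tapped node n0 realizes NAND(i0, i1).
root, taps = aic4([True, True, False, True], invert=[True, False, False])
print(taps[0])   # False, i.e. NAND of the first two inputs
```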

Skills acquired in this thesis:

  • Hands-on skills using Linux based systems
  • Programming in Python or C/C++
  • Working with tools like Cadence virtuoso environment and open source VTR (verilog-to-routing) tool for FPGAs
  • Problem analysis
  • Working in an international environment and communicating in English
  • Professional technical writing
  • Verilog/VHDL

Pre-Requisites:

  • Knowledge of FPGAs
  • Familiar with Linux environment, C or C++.

Contact Information:

Customizing Approximate Arithmetic Blocks for FPGA

 

Approximate Computing has emerged as a new paradigm for building highly power-efficient on-chip systems. The implicit assumption behind most standard low-power techniques is precise computing, i.e., the underlying hardware provides accurate results. However, continuing to support precise computing is most likely not the way to solve upcoming power-efficiency challenges. Approximate Computing relaxes the bounds of precise computation, thereby providing new opportunities for power savings, and may yield orders-of-magnitude performance/power benefits. Recent research studies by several companies (such as Intel, IBM, and Microsoft) and research groups have demonstrated that applying hardware approximations may provide 4x-60x energy reductions. These studies have shown that there is a large body of power-hungry applications from several domains, such as image and video processing, computer vision, Big Data, and Recognition, Mining and Synthesis (RMS), which are amenable to approximate computing due to their inherent resilience to approximation errors and can still produce output of acceptable quality. The state of the art has developed approximate hardware designs for only certain basic hardware blocks. For example, approximate designs exist mainly for ripple-carry adders, which have a high potential for approximation, while other widely used adder types, such as the Kogge-Stone, carry-lookahead, carry-select and carry-save adders, are largely ignored.
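As an example of what an entry in such an approximate library could look like, the sketch below is a behavioral model of a Lower-part OR Adder (LOA), a well-known approximate adder in which the low k bits are computed by a simple bitwise OR instead of full adders. The bit widths and the choice of LOA here are illustrative; the thesis would target FPGA-specific variants in HDL.

```python
def loa_add(a: int, b: int, k: int, width: int = 16) -> int:
    """Lower-part OR Adder: approximate low k bits, exact upper bits."""
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)                      # OR replaces the low full adders
    # LOA heuristic: carry into the exact part only if both MSBs of the low parts are 1.
    cin = ((a >> (k - 1)) & (b >> (k - 1)) & 1) if k > 0 else 0
    high = ((a >> k) + (b >> k) + cin) << k
    return (high | low) & ((1 << width) - 1)

print(loa_add(1003, 998, k=4), 1003 + 998)   # 1999 (approximate) vs. 2001 (exact)
```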

 

 

Goals of this Thesis and Potential Tasks (Contact for more Discussion):

  • Developing an approximate FPGA-specific library for different arithmetic modules like adders, multipliers, dividers, and logical operations.
  • Developing complex multi-bit approximate functions and accelerators for FPGAs.
  • Interfacing custom instructions and FPGAs to soft cores (e.g. Microblaze) and using SDSoC Framework.
  • Developing functionally equivalent software models, e.g., using C or C++ and testing in different benchmark applications.
  • Open-sourcing and Documentation.

 

Skills acquired in this Thesis:

  • Hands-on experience on FPGA development and new SDSoC framework.
  • Computer Arithmetic and Hardware Optimization.
  • In-depth technical knowledge on the cutting-edge research topic and emerging computing paradigms.
  • Problem analysis and exploration of novel solutions for real-world problems.
  • Open-Sourcing.
  • Team work and experience in an international environment.
  • Professional grade technical writing.

 

Pre-Requisites (not all strictly required):

  • Knowledge of Computer architecture, C or C++ or MATLAB.
  • VHDL programming (beneficial if known and practiced in some labs)

 

Contact information:

Approximate Image Processing on FPGA

Currently Running

Image and video processing applications are well known for their processing- and power-hungry nature. Therefore, low-power implementation of such applications on resource-constrained devices poses several challenges. Approximate Computing has emerged as a new paradigm for building highly power-efficient on-chip systems. The implicit assumption behind most standard low-power techniques is precise computing, i.e., the underlying hardware provides accurate results. However, continuing to support precise computing is most likely not the way to solve upcoming power-efficiency challenges. Approximate Computing relaxes the bounds of precise computation, thereby providing new opportunities for power savings, and may yield orders-of-magnitude performance/power benefits. Recent research studies by several companies (such as Intel, IBM, and Microsoft) and research groups have demonstrated that applying hardware approximations may provide 4x-60x energy reductions. These studies have shown that there is a large body of power-hungry applications from several domains, such as image and video processing, computer vision, Big Data, and Recognition, Mining and Synthesis (RMS), which are amenable to approximate computing due to their inherent resilience to approximation errors and can still produce output of acceptable quality.
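To give a feel for data approximation in this domain, the sketch below applies a 3x3 box blur whose operands are truncated to model a reduced-precision datapath, and measures the resulting quality loss with PSNR. The filter choice, truncation scheme and bit counts are illustrative only; the thesis would evaluate such trade-offs on FPGA hardware.

```python
import numpy as np

def box_blur_approx(img: np.ndarray, drop_bits: int = 2) -> np.ndarray:
    """3x3 box blur where each input pixel is truncated by `drop_bits` low bits,
    mimicking an approximate (reduced-precision) arithmetic datapath."""
    trunc = (img.astype(np.uint16) >> drop_bits) << drop_bits
    pad = np.pad(trunc, 1, mode="edge")
    acc = np.zeros(img.shape, dtype=np.uint32)
    for dy in range(3):
        for dx in range(3):
            acc += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (acc // 9).astype(np.uint8)

def psnr(ref: np.ndarray, approx: np.ndarray) -> float:
    mse = np.mean((ref.astype(np.float64) - approx.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in for a grayscale frame
exact = box_blur_approx(img, drop_bits=0)                   # reference: no truncation
approx = box_blur_approx(img, drop_bits=3)
print("PSNR:", psnr(exact, approx), "dB")
```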

 

Goals of this Thesis and Potential Tasks (Contact for more Discussion):

  • Developing image processing algorithms, like pre-processing and post-processing filters, edge detection, line detection, object detection, face recognition, and motion tracking.
  • Hardware development, testing, performance and power analysis for FPGA-based systems.
  • Research and Development of novel computation and data approximation techniques.

 

Skills acquired in this Thesis:

  • Hands-on experience on FPGA development.
  • In-depth technical knowledge on the cutting-edge research topic and emerging computing paradigms.
  • Problem analysis and exploration of novel solutions for real-world problems.
  • Team work and experience in an international environment.
  • Professional grade technical writing.

 

Pre-Requisite:

  • Knowledge of Computer architecture.
  • VHDL programming (beneficial if known and practiced in some labs)

 

Helpful Skills:

  • Knowledge about image processing algorithms

 

Contact information:

Whiteboard Ink Reader

Whiteboards are a useful tool for teaching, brainstorming and note-taking in collaborative work. Often, ideas are created, elaborated and dismissed in quick succession. Sometimes it is desirable to have access to the products of this process, for example for creating slides or revisiting ideas later on. Current solutions are mostly focused on fully digital whiteboard implementations, where digitizers are used to create a projected image. These systems are not only expensive, but also influence the way in which the whiteboard is used. For example, it is no longer possible to simply wipe away parts of the image with a fingertip. In this project, we look for a solution that uses commonly available components to create a system that can capture whiteboard content as it is created, in a format suitable for storage and further processing, such as shape recognition.
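A minimal capture pipeline could look like the sketch below: grab a frame from a webcam, rectify the whiteboard region with a perspective transform, and binarize the ink with adaptive thresholding. The corner coordinates, resolution and camera index are illustrative placeholders (in practice they would come from a calibration or corner-detection step).

```python
import cv2
import numpy as np

# Hypothetical whiteboard corners in the camera image (top-left, top-right,
# bottom-right, bottom-left), e.g. from a one-time manual calibration.
CORNERS = np.float32([[120, 80], [1180, 60], [1210, 700], [90, 720]])
TARGET_W, TARGET_H = 1280, 800

def extract_ink(frame: np.ndarray) -> np.ndarray:
    """Rectify the whiteboard region and binarize the pen strokes."""
    dst = np.float32([[0, 0], [TARGET_W, 0], [TARGET_W, TARGET_H], [0, TARGET_H]])
    M = cv2.getPerspectiveTransform(CORNERS, dst)
    board = cv2.warpPerspective(frame, M, (TARGET_W, TARGET_H))
    gray = cv2.cvtColor(board, cv2.COLOR_BGR2GRAY)
    # Ink is darker than the (unevenly lit) white background.
    return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 31, 15)

cap = cv2.VideoCapture(0)        # webcam pointed at the whiteboard
ok, frame = cap.read()
if ok:
    cv2.imwrite("whiteboard_ink.png", extract_ink(frame))
cap.release()
```

Incremental capture would then diff successive binarized frames to record only newly added strokes.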

Goals of this Thesis

  • Use one or more image capture devices (e.g. smart phones, webcams) to record whiteboard content incrementally
  • Combine and analyze images, centrally or distributed (e.g. smart phone, Raspberry Pi, FPGA Board)
  • Store result in a format suitable for further processing and presentation

 

Potential Tasks

  • Research strengths and weaknesses of existing solutions
  • Smart Phone App development
  • Image processing for FPGA/GPU

 

Pre-Requisites

  • Embedded Development experience (e.g. Raspberry Pi, Microcontroller Development)
  • Image processing fundamentals

 

Helpful Skills

  • FPGA development, preferably VHDL
  • Smart phone app development experience

 

Contact information for more details:

References

  • He, Liu, Zhang, "Why take notes? Use the whiteboard capture system", in 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), 2003.
  • He, Zhang, "Real-time whiteboard capture and processing using a video camera for teleconferencing", in 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), 2005.