Thursday, August 30, 2012

Paper Reading #1: KinectFusion, Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera

Reference Information

Shahram Izadi, David Kim, et al. (2011). "KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera." Proceedings of the 24th ACM Symposium on User Interface Software and Technology (UIST '11). 16 Oct. 2011. Web. 30 Aug. 2012.

Authors

Shahram Izadi - Co-leads the Interactive 3D Technologies group at MSR Cambridge.
David Kim - Member of the Digital Interaction Group at Culture Lab and MSR Cambridge.
Otmar Hilliges - Postdoctoral researcher in the Sensors and Devices Group at Microsoft Research.
David Molyneaux - Intern with the SenDev team at MSRC, studying software engineering.
Richard Newcombe - Works at MSR Cambridge in the Computer Vision Group.
Pushmeet Kohli - Researcher at Microsoft studying intelligent machines.
Jamie Shotton - Researcher at MSR Cambridge working on computer vision algorithms.
Steve Hodges - Leads the Sensors and Devices group at MSR Cambridge, working on hardware accessories.
Dustin Freeman - Working on a Ph.D. in Computer Science at the University of Toronto.
Andrew Davison - Leads the Robot Vision Research Group and teaches at Imperial College.
Andrew Fitzgibbon - Researches computer vision and video at MSR Cambridge.


Article Summary

Photo Credit: KinectFusion Project Page (http://research.microsoft.com/en-us/projects/surfacerecon)


The article on KinectFusion discusses using real-time hardware to recreate a physical environment as a virtual one. Microsoft's Kinect, the main piece of hardware involved, produces noisy and incomplete depth data on its own. KinectFusion is the authors' attempt to overcome those errors and build a 3D model of the scene being recorded from that data. In addition to using the hardware's passive cameras, live images, and tracking info, the authors also develop an algorithm that brings the environment's attributes into play, which allows an environment to be effectively tracked and mapped. Because of the limitations of the Kinect hardware, the authors had to extend the GPU graphics pipeline to allow for an increase in interactivity.
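As a rough illustration of the volumetric fusion idea the paper describes (this is my own simplified sketch, not the authors' actual implementation — the grid size, camera intrinsics, and per-frame weighting here are assumptions), each depth frame can be folded into a voxel grid of truncated signed distances:

```python
import numpy as np

def integrate_tsdf(tsdf, weights, depth, K, voxel_size, trunc):
    """Fuse one depth frame into a TSDF voxel volume.

    Assumes a camera at the origin looking down +z and a cubic grid
    centred on the optical axis (hypothetical setup for illustration).
    """
    n = tsdf.shape[0]
    idx = np.arange(n)
    gx, gy, gz = np.meshgrid(idx, idx, idx, indexing="ij")
    # world coordinates of voxel centres, volume placed in front of camera
    x = (gx - n / 2) * voxel_size
    y = (gy - n / 2) * voxel_size
    z = gz * voxel_size + voxel_size
    # project each voxel centre into the depth image (pinhole model)
    u = np.round(K[0, 0] * x / z + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * y / z + K[1, 2]).astype(int)
    h, w = depth.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(valid, depth[v.clip(0, h - 1), u.clip(0, w - 1)], 0.0)
    valid &= d > 0
    # truncated signed distance: positive in free space, negative behind surface
    sdf = np.clip(d - z, -trunc, trunc) / trunc
    # skip voxels far behind the observed surface, then take a running
    # weighted average with the previous frames (weight 1 per frame)
    upd = valid & (d - z > -trunc)
    tsdf[upd] = (tsdf[upd] * weights[upd] + sdf[upd]) / (weights[upd] + 1)
    weights[upd] += 1
```

Averaging the signed distances across frames is what smooths out the Kinect's per-frame noise; the real system does this per frame on the GPU and raycasts the volume to render and track the surface.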

In the end, the authors were able to create an almost real-time virtual environment interaction with a piece of hardware initially designed only to interpret relatively static data. With the additions they made, they showed it was possible to combine real-world environments with a virtual recreation. Human and technology interaction can be easily seen in this example.

Related Work

When searching for topics related to this paper, I found many results describing what the KinectFusion design does. All of them seemed intrigued by the concept of creating a virtual map of an environment in real time. They described the Kinect's flaws just as the authors above had done and explained what KinectFusion accomplishes through its redesign. They discussed design attempts by other authors but in the end praised KinectFusion for providing a more robust feature set. A majority of the papers shared the paper's title and were featured on Microsoft Research's website. There were even YouTube videos, called KinectFusion HQ and Real-Time KinectFusion, that explained how the authors' concept works.

Evaluation

I feel this work was evaluated without bias and used low-level, systematic approaches to redesign the hardware. The authors first laid out the problems with their current hardware and proposed resolutions. They measured their work quantitatively, describing each piece of hardware inside their design and systematically combining all the parts into one to create a virtual environment. The way they described it was hard to follow at times, but worthwhile.

Discussion

This paper is important because it shows that humans and technology can interact with each other in an almost real-time manner. Innovating on pre-existing concepts shows that there is still much that can be accomplished and that technology is truly not at a standstill.

The authors began by pointing out the flaws of Microsoft's Kinect and then explained how they tried to solve them. By programming at a level deep enough to extend the GPU pipeline, they allowed for true innovation and helped their design flourish.

This article shows me that understanding technology entirely at a low level is important for truly innovating on a product. While hard to follow, I feel the evaluation given by the authors was appropriate.

Wednesday, August 29, 2012

Blog Entry #0 : The Introduction

Hi! I'm Zack, and this is my introductory blog for CSCE 436! This course is about computer and human interaction, and I'm sure more blogs will follow. Let's interact! This is me below...


Contact E-Mail: crewxp@gmail.com
Class Standing: 5th-year senior

I'm taking this class because the topic seemed really interesting to me. Learning how technology and humans interact with each other on a deeper level will most likely help me with my future goals in life. I come to this class with a long history of tinkering with technology, and I've always had my own ideas about how technology should be presented to people.

Eventually in life I want to have a stable career and still be able to pursue my own hobbies on the side. I have deep passions and would like to have a job where I make a difference to people. From starting a new technology trend to finding bad guys with my own technological skills, I really want to matter.

On the side, I love tinkering with music and creating contraptions around my house to make my own life easier. Being able to pursue a hobby in entertainment or music would allow me to express my creative side and would really make me happy.

After I graduate, pursuing either interest would really be something I'd like. I want to start my own company, but I am worried it might not leave room for my hobbies. Ten years from now, I hope to have a wife, eventually kids, and still be able to pursue my current interests. Having the same goals that far down the road as I do now would be divine.

If I had to guess the next technological advancement in computer science, I would say it will have something to do with simplifying life. I've found that people enjoy simplicity, and the more technology advances, the easier it becomes to use. Maybe something that everyone can use every day will be the next big thing. Maybe a home AI?

If I could travel back in time, I would try to meet Einstein and ask about his values and goals. That he had such a chaotic life with so many mishaps, yet still eventually had such a big influence on the world, really amazes me. I want to be able to understand his views and relate them to my own.

If I could be fluent in a language other than my own, I would pick Japanese. I wouldn't mind moving there, as the technology and culture there really interest me. Lots of people there are creative and driven, and being in such an environment would only help my eventual goals.

A fact about myself is that I have actually talked to real-life composers thanks to social networking. I'm Facebook friends with a few, and some have even commented on pieces of work I've posted. What they had to say about what I do really inspired me.