Krystle Hewitt / en Light in a bottle: U of T researchers use AI to capture photons in motion /news/light-bottle-u-t-researchers-use-ai-capture-photons-motion <span class="field field--name-title field--type-string field--label-hidden">Light in a bottle: U of T researchers use AI to capture photons in motion</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2024-11/vlcsnap-2024-11-19-09h57m28s106-crop.jpg?h=c6612aec&amp;itok=HTP8JxLv 370w, /sites/default/files/styles/news_banner_740/public/2024-11/vlcsnap-2024-11-19-09h57m28s106-crop.jpg?h=c6612aec&amp;itok=Hjp8Z3F8 740w, /sites/default/files/styles/news_banner_1110/public/2024-11/vlcsnap-2024-11-19-09h57m28s106-crop.jpg?h=c6612aec&amp;itok=R4eD09Aa 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2024-11/vlcsnap-2024-11-19-09h57m28s106-crop.jpg?h=c6612aec&amp;itok=HTP8JxLv" alt="a video still showing a photon of light passing through a water-filled coke bottle"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2024-11-19T10:07:26-05:00" title="Tuesday, November 19, 2024 - 10:07" class="datetime">Tue, 11/19/2024 - 10:07</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item"><p><em>A scene rendered using videos from an ultra-high-speed camera shows a pulse of light travelling through a pop bottle, scattering off liquid, hitting the ground, focusing on the cap and reflecting back&nbsp;(supplied
image)</em></p> </div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/krystle-hewitt" hreflang="en">Krystle Hewitt</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/breaking-research" hreflang="en">Breaking Research</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/graduate-students" hreflang="en">Graduate Students</a></div> <div class="field__item"><a href="/news/tags/research-innovation" hreflang="en">Research &amp; Innovation</a></div> </div> <div class="field field--name-field-subheadline field--type-string-long field--label-above"> <div class="field__label">Subheadline</div> <div class="field__item">A novel AI algorithm simulates what an ultra-fast scene –&nbsp;such as a pulse of light speeding through a pop bottle – would look like from any vantage point</div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>Close your eyes and picture the iconic “bullet time” scene from <em>The Matrix</em> – the one where Neo, played by Keanu Reeves, dodges bullets in slow motion.&nbsp;Now imagine being able to witness the same effect, but instead of speeding bullets, you’re watching something that moves one million times
faster: light itself.&nbsp;</p> <p>Computer scientists from the University of Toronto have built an advanced camera setup that can visualize light in motion from any perspective, opening avenues for further inquiry into new types of 3D sensing techniques.&nbsp;</p> <p>The researchers developed a sophisticated AI algorithm that can simulate what an ultra-fast scene –&nbsp;a pulse of light speeding through a pop bottle or bouncing off a mirror – would look like from any vantage point.</p> <figure role="group" class="caption caption-drupal-media align-left"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/2024-11/David-Lindell_sm-crop.jpg" width="300" height="301" alt="&quot;&quot;"> </div> </div> <figcaption><em>David Lindell (supplied image)</em></figcaption> </figure> <p><strong>David Lindell</strong>, an assistant professor in the department of computer science in the Faculty of Arts &amp; Science, says the feat requires the ability to generate videos where the camera appears to “fly” alongside the very photons of light as they travel.</p> <p>“Our technology can capture and visualize the actual propagation of light with the same dramatic, slowed-down detail,” says Lindell. 
“We get a glimpse of the world at speed-of-light timescales that are normally invisible.”</p> <p>The researchers believe the approach, <a href="https://anaghmalik.com/FlyingWithPhotons/" target="_blank">which was recently presented at the 2024 European Conference on Computer Vision</a>, can unlock new capabilities in several important research areas, including: non-line-of-sight imaging, a method that allows viewers to “see” around corners or behind obstacles using multiple bounces of light; imaging through scattering media, such as fog, smoke, biological tissues or turbid water; and 3D reconstruction, where understanding the behaviour of light that scatters multiple times is critical.&nbsp;</p> <p>In addition to Lindell, the research team included U of T computer science PhD student <strong>Anagh Malik</strong>, fourth-year engineering science undergraduate <strong>Noah Juravsky</strong> and Professor <strong>Kyros Kutulakos</strong>, as well as Stanford University Associate Professor <strong>Gordon Wetzstein</strong> and PhD student <strong>Ryan Po</strong>.</p> <p>The researchers’ key innovation lies in the AI algorithm they developed to visualize ultrafast videos from any viewpoint –&nbsp;a challenge known in computer vision as “novel view synthesis.”&nbsp;</p> <div class="align-center"> <div class="field field--name-field-media-oembed-video field--type-string field--label-hidden field__item"><iframe src="/media/oembed?url=https%3A//youtu.be/BtQV-KO8VCQ%3Fsi%3DHiw8kO2npjW1CGM-&amp;max_width=0&amp;max_height=0&amp;hash=sLd0aV6MNLAKB2V9PYlWW1yI7K7QqK1UFhHoca0D0dk" width="200" height="113" class="media-oembed-content" loading="eager" title="Flying with Photons: Rendering Novel Views of Propagating Light"></iframe> </div> </div> <p>Traditionally, novel view synthesis methods are designed for images or videos captured with regular cameras.
However, the researchers extended this concept to handle data captured by an ultra-fast camera operating at speeds comparable to light, which posed unique challenges – including the need for their algorithm to account for the speed of light and model how it propagates through a scene.&nbsp;</p> <p>Through their work, the researchers produced moving-camera visualizations of light in motion, including light refracting through water, bouncing off a mirror and scattering off a surface. They also demonstrated how to visualize phenomena predicted by Albert Einstein that only become apparent when objects move at a significant fraction of the speed of light. For example, they visualized the “searchlight effect,” which makes objects appear brighter when moving toward an observer, and “length contraction,” where fast-moving objects look shorter in the direction they are travelling.&nbsp;</p> <figure role="group" class="caption caption-drupal-media align-right"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/2024-11/Anagh-Malik_sm2-crop.jpg" width="300" height="300" alt="&quot;&quot;"> </div> </div> <figcaption><em>Anagh Malik (supplied image)</em></figcaption> </figure> <p>While current algorithms for processing ultra-fast videos typically focus on analyzing a single video from a single viewpoint, the researchers say their work is the first to extend this analysis to multi-view light-in-flight videos, allowing for the study of how light propagates from multiple perspectives.</p> <p>“Our multi-view light-in-flight videos serve as a powerful educational tool, offering a unique way to teach the physics of light transport,” says Malik.
“By visually capturing how light behaves in real-time – whether refracting through a material or reflecting off a surface – we can get a more intuitive understanding of the motion of light through a scene.</p> <p>“Additionally, our technology could inspire creative applications in the arts, such as filmmaking or interactive installations, where the beauty of light transport can be used to create new types of visual effects or immersive experiences.”&nbsp;</p> <p>The research also holds significant potential for improving LIDAR (Light Detection and Ranging) sensor technology used in autonomous vehicles. Typically, these sensors process incoming data immediately to create 3D images. But the researchers’ work suggests the potential to store the raw data, including detailed light patterns, to build systems that outperform conventional LIDAR – seeing finer details, imaging through obstacles and better distinguishing materials.&nbsp;</p> <p>While the researchers’&nbsp;project focused on visualizing how light moves through a scene from any direction, they note that light carries “hidden information” about the shape and appearance of everything it touches.
As the researchers look to their next steps, they want to unlock this information by developing a method that uses multi-view light-in-flight videos to reconstruct the 3D geometry and appearance of the entire scene.&nbsp;</p> <p>“This means we could potentially create incredibly detailed, three-dimensional models of objects and environments – just by watching how light travels through them,” Lindell says.&nbsp;</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Tue, 19 Nov 2024 15:07:26 +0000 Christopher.Sorensen 310650 at U of T initiative encourages computer science students to incorporate ethics into their work /news/u-t-initiative-encourages-computer-science-students-incorporate-ethics-their-work <span class="field field--name-title field--type-string field--label-hidden">U of T initiative encourages computer science students to incorporate ethics into their work</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2024-04/GettyImages-1063724556-crop.jpg?h=81d682ee&amp;itok=8N5uArHf 370w, /sites/default/files/styles/news_banner_740/public/2024-04/GettyImages-1063724556-crop.jpg?h=81d682ee&amp;itok=SuP6_Tgs 740w, /sites/default/files/styles/news_banner_1110/public/2024-04/GettyImages-1063724556-crop.jpg?h=81d682ee&amp;itok=bU01W_QA 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2024-04/GettyImages-1063724556-crop.jpg?h=81d682ee&amp;itok=8N5uArHf" alt="a woman sits in a computer science classroom"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span
class="field field--name-created field--type-created field--label-hidden"><time datetime="2024-04-26T15:22:07-04:00" title="Friday, April 26, 2024 - 15:22" class="datetime">Fri, 04/26/2024 - 15:22</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item"><p><em>(photo by urbazon/Getty Images)</em></p> </div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/krystle-hewitt" hreflang="en">Krystle Hewitt</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> </div> <div class="field field--name-field-subheadline field--type-string-long field--label-above"> <div class="field__label">Subheadline</div> <div class="field__item">Total enrolment in courses featuring Embedded Ethics Education Initiative modules exceeded 8,000 students this year</div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>Computer science 
students at the University of Toronto are learning how to incorporate ethical considerations into the design and development of new technologies such as artificial intelligence with the help of a unique undergraduate initiative.</p> <p>The <a href="https://www.cs.toronto.edu/embedded-ethics/">Embedded Ethics Education Initiative</a> (E3I) aims to provide students with the ability to critically assess the societal impacts of the technologies they will be designing and developing throughout their careers. That includes grappling with issues such as AI safety, data privacy and misinformation.</p> <p>Program co-creator<strong> Sheila McIlraith</strong>, a professor in the department of computer science in the Faculty of Arts &amp; Science and an associate director at the <a href="http://srinstitute.utoronto.ca">Schwartz Reisman Institute for Technology and Society</a>&nbsp;(SRI), says E3I aims to help students “recognize the broader ramifications of the technology they’re developing on diverse stakeholders, and to avoid or mitigate any negative impact.”&nbsp;</p> <p>First launched in 2020 as a two-year pilot program, the initiative is a collaborative venture between the&nbsp;department of computer science and SRI in association with the&nbsp;department of philosophy. It integrates ethics modules into select undergraduate computer science courses – and has reached thousands of U of T students in this academic year alone.&nbsp;</p> <p><strong>Malaikah Hussain</strong> is one of the many U of T students who have benefited from the initiative. As a first-year student enrolled in <a href="https://artsci.calendar.utoronto.ca/course/csc111h1">CSC111: Foundations of Computer Science II</a>, she participated in an E3I module that explored how a data structure she learned about in class laid the foundation of a contact tracing system and raised ethical issues concerning data collection.
&nbsp;</p> <p>“The modules underlined how the software design choices we make extend beyond computing efficiency concerns to grave ethical concerns such as privacy,” says Hussain, who is now a third-year computer science specialist.</p> <p>Hussain adds that the modules propelled her interest in ethics and computing, leading her to pursue upper-year courses on the topic. During a subsequent internship, she organized an event about the ethics surrounding e-waste disposal and the company’s technology life cycle.</p> <p>“The E3I modules have been crucial in shaping my approach to my studies and work, emphasizing the importance of ethics in every aspect of computing,” she says.</p> <p>The program, which initially reached 400 students, has seen significant growth over the last four years. This academic year alone, total enrolment in&nbsp;computer science&nbsp;courses with E3I programming has exceeded 8,000 students. Another 1,500&nbsp;students participated in E3I programming in courses outside computer science.&nbsp;</p> <figure role="group" class="caption caption-drupal-media align-left"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/2024-04/techdesign-lead.jpg" width="370" height="270" alt="&quot;&quot;"> </div> </div> <figcaption><em>Clockwise from top left: Steven Coyne, Diane Horton, David Liu and Sheila McIlraith&nbsp;(supplied images)</em></figcaption> </figure> <p>In recognition of the program’s impact on the undergraduate student learning experience,&nbsp;McIlraith and her colleagues&nbsp;–&nbsp;<strong>Diane Horton </strong>and&nbsp;<strong>David Liu</strong>, a professor and associate professor, teaching stream, respectively, in the department of computer science, and <strong>Steven Coyne</strong>, an assistant professor who is jointly appointed to the departments of computer science and philosophy
–&nbsp;were recently recognized with the <a href="https://alumni.utoronto.ca/events-and-programs/awards/awex/northrop-frye-awards">2024 Northrop Frye Award (Team)</a>, one of the prestigious U of T Alumni Association Awards of Excellence.</p> <p>Horton, who leads the initiative’s assessment efforts, points to the team’s <a href="https://dl.acm.org/doi/abs/10.1145/3626252.3630834" target="_blank">recently published paper</a> showing that after participating in modules in only one or two courses, students are inspired to learn more about ethics and are benefiting in the workplace.</p> <p>“We have evidence that they are better able to identify ethical issues arising in their work, and that the modules help them navigate those issues,” she says.&nbsp;</p> <p>Horton adds that the findings build on <a href="https://dl.acm.org/doi/abs/10.1145/3478431.3499407">earlier assessment work</a> showing that after experiencing modules in only one course, students became more interested in ethics and tech, and more confident in their ability to deal with ethical issues they might encounter.</p> <p>The team says the initiative’s interdisciplinary nature is key to delivering both a curriculum and experience with an authentic voice, giving instructors and students the vocabulary and depth of knowledge to engage on issues such as privacy, well-being and harm.</p> <p>“As a philosopher and ethicist, I love teaching in a computer science department,” says Coyne. “My colleagues teach me about interesting ethical problems that they’ve found in their class material, and I get to reciprocate by finding distinctions and ideas that illuminate those problems.
And we learn a lot from each other – intellectually and pedagogically – when we design a module for that class together.” &nbsp;&nbsp;</p> <p>E3I is founded upon three key principles: teach students how – not what – to think; encourage ethics-informed design choices as a design principle; and make discussions safe, not personal. &nbsp;</p> <p>“Engaging with students and making them feel safe, not proselytizing, inviting the students to participate is especially important,” says Liu. &nbsp;</p> <p>The modules support this type of learning environment by using stakeholders with fictional character profiles that include names, pictures and a backstory. &nbsp;</p> <p>“Fictional stakeholders help add a layer of distance so students can think through the issues without having to say, ‘This is what I think,’” Horton says.&nbsp;“Stakeholders also increase their awareness of the different kinds of people who might be impacted.” &nbsp;</p> <p>McIlraith adds that having students advocate for an opinion that is not necessarily their own encourages empathy, while Liu notes that many have a “real hunger” to learn about the ethical considerations of their work.&nbsp;</p> <p>“An increasing number of students are thinking, ‘I want to be trained as a computer scientist and I want to use my skills after graduation,’ but also ‘I want to do something that I think will make a positive impact on the world,’” he says. &nbsp;&nbsp;</p> <p>Together, the E3I team works with course instructors to develop educational modules that tightly pair ethical concepts with course-specific technical material. In an applied software design course, for example, students learn about accessible software and disability theory; in a theoretical algorithms course, they learn about algorithmic fairness and distributive justice; and in a game design course, they learn about addiction and consent. 
&nbsp;</p> <p><strong>Steve Engels</strong>, a computer science professor, teaching stream, says integrating an ethics module about addiction into his fourth-year capstone course on video game design felt like a natural extension of his lecture topic on ludology – in particular, the psychological techniques used to make games compelling – instead of something that felt artificially inserted into the course.</p> <p>“Project-based courses can sometimes compel students to focus primarily on the final product of the course, but this module provided an opportunity to pause and reflect on what they were doing and why,” Engels says. “It forced them to confront their role in the important and current issue of gaming addiction, so they would be more aware of the ethical implications of their future work and thus be better equipped to handle it.”</p> <p>By next year, each undergraduate computer science student will encounter E3I modules in at least one or two courses every year throughout their program. The team is also exploring the adoption of the E3I model in other STEM disciplines, from ecology to statistics. Beyond U of T, the team plans to share their expertise with other Canadian universities that are interested in developing a similar program.&nbsp;</p> <p>“This initiative is having a huge impact,” McIlraith says. “You see it in the number of students we’re reaching and in our assessment results.
But it’s more than that – we’re instigating a culture change.”</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Fri, 26 Apr 2024 19:22:07 +0000 Christopher.Sorensen 307643 at U of T researchers develop video camera that captures 'huge range of timescales' /news/u-t-researchers-develop-video-camera-captures-huge-range-timescales <span class="field field--name-title field--type-string field--label-hidden">U of T researchers develop video camera that captures 'huge range of timescales'</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2023-11/20230927_Computational-imaging-researchers_01-crop.jpg?h=afdc3185&amp;itok=R5ZCIQQk 370w, /sites/default/files/styles/news_banner_740/public/2023-11/20230927_Computational-imaging-researchers_01-crop.jpg?h=afdc3185&amp;itok=JGfW1BQd 740w, /sites/default/files/styles/news_banner_1110/public/2023-11/20230927_Computational-imaging-researchers_01-crop.jpg?h=afdc3185&amp;itok=aH-hvKkC 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2023-11/20230927_Computational-imaging-researchers_01-crop.jpg?h=afdc3185&amp;itok=R5ZCIQQk" alt="&quot;&quot;"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2023-11-13T10:12:10-05:00" title="Monday, November 13, 2023 - 10:12" class="datetime">Mon, 11/13/2023 - 10:12</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long
field--label-above"> <div class="field__label">Cutline</div> <div class="field__item"><p><em>Researchers<strong> Sotiris Nousias</strong> and <strong>Mian Wei</strong> work on an experimental setup that uses a specialized camera and an imaging technique that timestamps individual particles of light to replay video across large timescales&nbsp;(photo by Matt Hintsa)</em></p> </div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/krystle-hewitt" hreflang="en">Krystle Hewitt</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/breaking-research" hreflang="en">Breaking Research</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/alumni" hreflang="en">Alumni</a></div> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/faculty-applied-science-engineering" hreflang="en">Faculty of Applied Science &amp; Engineering</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/graduate-students" hreflang="en">Graduate Students</a></div> <div class="field__item"><a href="/news/tags/research-innovation" hreflang="en">Research &amp; Innovation</a></div> </div> <div class="field field--name-field-subheadline field--type-string-long field--label-above"> <div class="field__label">Subheadline</div> <div class="field__item">“Our work introduces a unique camera capable of capturing videos that can be replayed at speeds ranging from the standard 30 frames per second to hundreds of billions
of frames per second”</div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>Computational imaging researchers at the University of Toronto have built a camera that can capture everything from light bouncing off a mirror to a ball bouncing on a basketball court – all in a single take.</p> <p>Dubbed by one researcher as a “microscope for time,” the imaging technique could lead to improvements in everything from medical imaging to the LIDAR (Light Detection and Ranging) technologies used in mobile phones and self-driving cars.</p> <p>“Our work introduces a unique camera capable of capturing videos that can be replayed at speeds ranging from the standard 30 frames per second to hundreds of billions of frames per second,” says&nbsp;<strong>Sotiris Nousias</strong>, a post-doctoral researcher who is working with <strong>Kyros Kutulakos</strong>, a professor of computer science in the Faculty of Arts &amp; Science.&nbsp;</p> <p>“With this technology, you no longer need to predetermine the speed at which you want to capture the world.”</p> <p>The research by members of the Toronto Computational Imaging Group&nbsp;– including computer science PhD student&nbsp;<strong>Mian Wei</strong>, electrical and computer engineering alumnus <strong>Rahul Gulve </strong>and&nbsp;<strong>David Lindell</strong>, an assistant professor of computer science&nbsp;–&nbsp;was recently presented at the&nbsp;2023 International Conference on Computer Vision, where it received one of two&nbsp;best paper awards.</p> <p>“Our camera is fast enough to even let us see light moving through a scene,” Wei says.
“This type of slow and fast imaging where we can capture video across such a huge range of timescales has never been done before.”</p> <p>Wei compares the approach to combining the various video modes on a smartphone: slow motion, normal video and time lapse.</p> <p>“In our case, our camera has just one recording mode that records all timescales simultaneously and then, afterwards, we can decide [what we want to view],” he says. “We can see every single timescale because if something’s moving too fast, we can zoom in to that timescale, if something’s moving too slow, we can zoom out and see that, too.”</p> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/styles/scale_image_750_width_/public/2023-11/group-photo.jpg?itok=2nAmr_Hq" width="750" height="500" alt="&quot;&quot;" class="image-style-scale-image-750-width-"> </div> </div> <figcaption><em>Postdoctoral researcher&nbsp;<strong>Sotiris Nousias,&nbsp;</strong>PhD student <strong>Mian Wei,&nbsp;</strong>Assistant Professor <strong>David Lindell </strong>and<br> Professor <strong>Kyros Kutulakos </strong>(photos supplied)</em></figcaption> </figure> <p>While conventional high-speed cameras can record video up to around one million frames per second without a dedicated light source – fast enough to capture videos of a speeding bullet – they are too slow to capture the movement of light.</p> <p>The researchers say capturing an&nbsp;image much faster than a speeding bullet without a synchronized light source such as strobe light or a laser creates a challenge because very little light is collected during such a short exposure period&nbsp;– and a significant amount of light is needed to form an image.</p> <p>To overcome these issues, the research team used a special type of ultra-sensitive sensor called a free-running single-photon avalanche diode 
(SPAD). The sensor operates by time-stamping the arrival of individual photons (particles of light) with precision down to trillionths of a second. To recover a video, they use a computational algorithm that analyzes when the photons arrive and estimates how much light is incident on the sensor for any given instant in time, regardless of whether that light came from room lights, sunlight or from lasers operating nearby.</p> <p>Reconstructing and playing back a video is a matter of retrieving the light levels corresponding to each video frame.</p> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/styles/scale_image_750_width_/public/2023-11/photons-figure.jpg?itok=L74Lqvv4" width="750" height="424" alt="&quot;&quot;" class="image-style-scale-image-750-width-"> </div> </div> <figcaption><em>The researchers’ “passive ultra-wideband imaging” approach uses a set of timestamps to detect the arrival of individual photons.</em></figcaption> </figure> <p>The researchers refer to the novel approach as “passive ultra-wideband imaging,” which enables post-capture refocusing in time – from transient to everyday timescales.</p> <p>“You don’t need to know what happens in the scene, or what light sources are there. You can record information and you can refocus on whatever phenomena or whatever timescale you want,” Nousias explains.</p> <p>Using an experimental setup that employed multiple external light sources and a spinning fan, the team demonstrated their method’s ability to allow for post-capture timescale selection.
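</p> <p><em>The post-capture replay idea can be sketched in a simplified form: because the camera records only photon arrival timestamps, a frame rate can be chosen after capture by binning the same timestamp stream at different temporal resolutions. The sketch below is an illustration with invented values, not the team's actual reconstruction algorithm, which also models background light and sensor characteristics.</em></p>

```python
import numpy as np

def replay(timestamps, fps, duration):
    """Bin photon arrival times (in seconds) into frames at a chosen
    frame rate; the same stream can be replayed at any timescale."""
    n_frames = int(round(duration * fps))
    edges = np.linspace(0.0, duration, n_frames + 1)
    counts, _ = np.histogram(timestamps, bins=edges)
    return counts  # per-frame photon counts, a proxy for brightness

# One simulated photon stream, replayed at two very different speeds
rng = np.random.default_rng(0)
photons = rng.uniform(0.0, 1e-3, size=100_000)     # arrivals within 1 ms
slow = replay(photons, fps=30_000, duration=1e-3)  # 30 frames
fast = replay(photons, fps=1e9, duration=1e-3)     # 1,000,000 frames
```

<p><em>Every photon lands in exactly one frame at either timescale, so zooming in or out is purely a post-processing choice over the same recorded data.</em></p> <p>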
In their demonstration, they used photon timestamp data captured by a free-running SPAD camera to play back video of a rapidly spinning fan at both 1,000 frames per second and 250 billion frames per second.</p> <p><iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen frameborder="0" height="422" loading="lazy" src="https://www.youtube.com/embed/StVUUAL7CxI?si=Eeeue483SWmnvKpD" title="YouTube video player" width="100%"></iframe></p> <p>The technology could have myriad applications.</p> <p>“In biomedical imaging, you might want to be able to image across a huge range of timescales at which biological phenomena occur. For example, protein folding and binding happen across timescales from nanoseconds to milliseconds,” says Lindell. “In other applications, like mechanical inspection, maybe you’d like to image an engine or a turbine for many minutes or hours and then after collecting the data, zoom in to a timescale where an unexpected anomaly or failure occurs.”</p> <p>In the case of self-driving cars, each vehicle may use an active imaging system like LIDAR to emit light pulses that can interfere with other systems on the road. However, the researchers say their technology could “turn this problem on its head” by capturing and using ambient photons. For example, they say it might be possible to create universal light sources that any car, robot or smartphone can use without requiring the explicit synchronization that is needed by today’s LIDAR systems.</p> <p>Astronomy is another area that could see imaging advances&nbsp;– including in the study of phenomena such as fast radio bursts.</p> <p>“Currently, there is a strong focus on pinpointing the optical counterparts of these fast radio bursts more precisely in their host galaxies. 
This is where the techniques developed by this group, particularly their innovative use of SPAD cameras, can be valuable,” says&nbsp;<strong>Suresh Sivanandam</strong>, interim director of the Dunlap Institute for Astronomy &amp; Astrophysics and associate professor at the David A. Dunlap department of astronomy and astrophysics.</p> <p>The researchers say that while sensors capable of timestamping photons already exist – it’s an emerging technology that’s been deployed in the iPhone’s LIDAR and proximity sensors&nbsp;– no one has used the photon timestamps in this way to enable this type of ultra-wideband, single-photon imaging.</p> <p>“What we provide is a microscope for time,”&nbsp;Kutulakos says. “So, with the camera you record everything that happened and then you can go in and observe the world at imperceptibly fast timescales.</p> <p>“Such capability can open up a new understanding of nature and the world around us.”</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Mon, 13 Nov 2023 15:12:10 +0000 Christopher.Sorensen 304345 at Researchers find similarities in the way both children and societies alter words' meanings /news/researchers-find-similarities-way-both-children-and-societies-alter-words-meanings <span class="field field--name-title field--type-string field--label-hidden">Researchers find similarities in the way both children and societies alter words' meanings</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2023-08/GettyImages-1209896821-crop.jpg?h=afdc3185&amp;itok=G9nVSN0k 370w, /sites/default/files/styles/news_banner_740/public/2023-08/GettyImages-1209896821-crop.jpg?h=afdc3185&amp;itok=U6qWiFFM 740w, 
/sites/default/files/styles/news_banner_1110/public/2023-08/GettyImages-1209896821-crop.jpg?h=afdc3185&amp;itok=neuK_flc 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2023-08/GettyImages-1209896821-crop.jpg?h=afdc3185&amp;itok=G9nVSN0k" alt="a young boy speaks to his mother"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2023-08-22T09:22:31-04:00" title="Tuesday, August 22, 2023 - 09:22" class="datetime">Tue, 08/22/2023 - 09:22</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item"><p><em>(photo by Steve Debenport/Getty Images)</em></p> </div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/krystle-hewitt" hreflang="en">Krystle Hewitt</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/breaking-research" hreflang="en">Breaking Research</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/research-innovation" hreflang="en">Research &amp; Innovation</a></div> </div> <div class="field 
field--name-field-subheadline field--type-string-long field--label-above"> <div class="field__label">Subheadline</div> <div class="field__item">"Our hypothesis is that these processes are fundamentally the same"</div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>An international team of researchers is using computer science to explore the knowledge foundation of word meaning in both child language development and the evolution of word meanings across languages.</p> <p>Through a computational framework they developed, the researchers show how patterns of children’s language innovation can be used to predict patterns of language evolution, and vice versa.</p> <p>The interdisciplinary work by University of Toronto computer science researcher <strong>Yang Xu</strong>&nbsp;and computational linguistics and cognitive science researchers from&nbsp;Universitat Pompeu Fabra&nbsp;and&nbsp;ICREA&nbsp;in Spain was <a href="https://www.science.org/doi/10.1126/science.ade7981">recently published in the journal <em>Science</em></a>.</p> <figure role="group" class="caption caption-drupal-media align-left"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/2023-08/yang%2B400%2Bx%2B600.jpg" width="300" height="450" alt="&quot;&quot;"> </div> </div> <figcaption><em>Associate Professor Yang Xu (photo by Matt Hintsa)</em></figcaption> </figure> <p>In the paper, the team investigates what is known as word meaning extension,&nbsp;which is the creative use of known words to express novel meanings.</p> <p>The research aimed to look at how this type of human lexical creativity observed in both children and adult language users can be understood in a unified framework, says Xu, a senior author on the paper and an associate professor in the&nbsp;department of computer science&nbsp;in the Faculty of Arts 
&amp; Science and the&nbsp;cognitive science program at University College.</p> <p>“A common strategy of human lexical creativity is to use words we know to express something new, so that we can save the effort of creating new words. Our paper offers a unified view of the various processes of word meaning extension observed at different timescales, across populations and within individuals.”</p> <p>Word meaning extension is often observed in the historical change or evolution of language, Xu adds. For example, the word “mouse” in English originally meant a type of rodent, but now also refers to a computer device.</p> <p>On the other hand, word meaning extension is also observed in children as early as two years of age. For example, children sometimes use the word “ball” to refer to “balloon,” presumably because they haven’t yet acquired the right word to describe the latter object, so they overextend a known word to express what they want to say.&nbsp;</p> <p>“In this study, we ask whether processes of word meaning extension at two very different timescales, in language evolution, which takes place over hundreds or thousands of years, and in children’s language development, which typically occurs on the order of months or years, have something in common with each other,” Xu says. “Our hypothesis is that these processes are fundamentally the same. 
There is a shared repertoire of knowledge types that underlies word meaning extension in both language evolution and language development.”</p> <p>To test their hypothesis and figure out what products of language learning and language evolution have in common, the team built a computational model that takes pairs of meanings or concepts as input, such as “ball” versus “balloon,” “door” versus “key” or “fire” versus “flame,” and predicts how likely the two concepts are to be co-expressed under the same word.</p> <p>In building their model, the researchers constructed a knowledge base that helps identify similarity relations between concepts as they “believe it is the key that makes people relate meanings in word meaning extension,” Xu says.</p> <p>The knowledge base consists of four primary knowledge types grounded in human experience: visual perception, associative knowledge, taxonomic knowledge and affective knowledge. Pairs of concepts score high if they are measured to be similar in one or some of these knowledge types.</p> <p>“The pair of concepts like ‘ball’ and ‘balloon’ would score high due to their visual similarity, whereas ‘key’ and ‘door’ would score high because they are thematically related or often co-occur together in daily scenarios,” Xu explains. “On the contrary, for a pair of concepts such as ‘water’ and ‘pencil,’ they would have little similarity measured in any of the four knowledge types, so that pair would receive a low score. 
As a result, the model would predict they can’t, or they are unlikely to, extend to each other.”&nbsp;</p> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/styles/scale_image_750_width_/public/2023-08/fig1%20%28002%29%20-%20lexical%20creativity%20and%20framework.jpg?itok=mcf6fHeF" width="750" height="971" alt="&quot;&quot;" class="image-style-scale-image-750-width-"> </div> </div> <figcaption>Figure A: Researchers demonstrate examples of child overextension that are also found in language evolution.<br> Figure B: The team developed a computational framework for investigating the possibility of a common foundation in lexical creativity.&nbsp; (images by Yang Xu)</figcaption> </figure> <p>Xu notes the team found all four knowledge types contributed to word meaning extension and a model that incorporates these types tends to better predict data than alternative models that rely on fewer or individual knowledge types.</p> <p>“This finding tells us that word meaning extension relies on multifaceted and grounded knowledge based on people’s perceptual, affective and common-sense knowledge,” he says.</p> <p>Built exclusively from children’s word meaning extension data, the model can successfully predict word meaning extension patterns from language evolution and can also make predictions in the reverse direction on children’s overextension when trained on language evolution data.</p> <p>“This cross-predictive analysis suggests that there are shared knowledge types between children’s word meaning extension and the products of language evolution, despite the fact that they occur at very different timescales. 
These processes both rely on a common core knowledge foundation – together these findings help us understand word meaning extension in a unified way,” Xu says.</p> <p>Xu stresses that existing research on child overextension has typically been discussed in the context of developmental psychology, whereas word meaning extension in history is typically discussed in historical and computational linguistics, so this project aims to build a tighter connection between the two fields of research.</p> <p>The researchers hope that additional computational modelling will shed light on other potential lines of inquiry, including the basic mechanisms at play in the historical evolution of word meanings and the emergence of word meanings in child development, as well as the origins of different semantic knowledge types and how they are acquired.</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Tue, 22 Aug 2023 13:22:31 +0000 Christopher.Sorensen 302704 at Researchers develop interactive ‘Stargazer’ camera robot that can help film tutorial videos /news/researchers-develop-interactive-stargazer-camera-robot-can-help-film-tutorial-videos <span class="field field--name-title field--type-string field--label-hidden">Researchers develop interactive ‘Stargazer’ camera robot that can help film tutorial videos</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2023-05/20230515_Stargazer-Robot_Jiannan-Li_02_edit-crop.jpg?h=afdc3185&amp;itok=ytPWrguY 370w, /sites/default/files/styles/news_banner_740/public/2023-05/20230515_Stargazer-Robot_Jiannan-Li_02_edit-crop.jpg?h=afdc3185&amp;itok=D3scWMZG 740w, 
/sites/default/files/styles/news_banner_1110/public/2023-05/20230515_Stargazer-Robot_Jiannan-Li_02_edit-crop.jpg?h=afdc3185&amp;itok=KgHRDWb- 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2023-05/20230515_Stargazer-Robot_Jiannan-Li_02_edit-crop.jpg?h=afdc3185&amp;itok=ytPWrguY" alt="&quot;&quot;"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>siddiq22</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2023-05-19T15:02:56-04:00" title="Friday, May 19, 2023 - 15:02" class="datetime">Fri, 05/19/2023 - 15:02</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item"><p>Research led by 鶹Ƶ computer science PhD candidate Jiannan Li explores how an interactive camera robot can assist instructors and others in making how-to videos (photo by Matt Hintsa)</p> </div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/faculty-information" hreflang="en">Faculty of Information</a></div> <div class="field__item"><a href="/news/tags/machine-learning" hreflang="en">machine learning</a></div> <div class="field__item"><a 
href="/news/tags/research-innovation" hreflang="en">Research &amp; Innovation</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>A group of&nbsp;computer scientists from the University of Toronto&nbsp;wants to make it easier to film&nbsp;how-to videos.&nbsp;</p> <p>The team of researchers&nbsp;<a href="http://www.dgp.toronto.edu/~jiannanli/stargazer/stargazer.html">has developed Stargazer</a>, an interactive camera robot that helps university instructors and other content creators create engaging tutorial videos demonstrating physical skills.</p> <p>For those&nbsp;without access to a cameraperson, Stargazer can capture dynamic instructional videos and address the constraints of working with static cameras.</p> <p>“The robot is there to help humans, but not to replace humans,” explains lead researcher&nbsp;<a href="https://www.dgp.toronto.edu/~jiannanli" target="_blank"><strong>Jiannan Li</strong></a>, a PhD candidate in 鶹Ƶ's department of computer science in the Faculty of Arts &amp; Science.</p> <p>“The instructors are here to teach. 
The robot’s role is to help with filming –&nbsp;the heavy-lifting work.”</p> <p>The Stargazer work is outlined in a&nbsp;<a href="https://dl.acm.org/doi/abs/10.1145/3544548.3580896">published paper</a>&nbsp;presented this year at the Association for Computing Machinery Conference on Human Factors in Computing Systems, a leading international conference in human-computer interaction.</p> <p>Li’s co-authors include fellow members of 鶹Ƶ's&nbsp;<a href="https://www.dgp.toronto.edu/">Dynamic Graphics Project</a>&nbsp;(dgp) lab: postdoctoral researcher&nbsp;<a href="https://mauriciosousa.github.io/" target="_blank"><strong>Mauricio Sousa</strong></a>, PhD students&nbsp;<a href="https://karthikmahadevan.ca/" target="_blank"><strong>Karthik Mahadevan</strong></a>&nbsp;and&nbsp;<a href="https://www.dgp.toronto.edu/~bryanw/" target="_blank"><strong>Bryan Wang</strong></a>, Professor&nbsp;<a href="https://www.dgp.toronto.edu/~ravin/" target="_blank"><strong>Ravin Balakrishnan</strong></a>&nbsp;and Associate Professor&nbsp;<a href="https://www.tovigrossman.com/" target="_blank"><strong>Tovi Grossman</strong></a>; as well as Associate Professor&nbsp;<a href="https://ischool.utoronto.ca/profile/tony-tang/" target="_blank"><strong>Anthony Tang</strong></a>&nbsp;(cross-appointed with the Faculty of Information);&nbsp;recent 鶹Ƶ Faculty of Information graduates&nbsp;<strong>Paula Akemi Aoyaui</strong>&nbsp;and&nbsp;<strong>Nicole Yu</strong>; and third-year computer engineering student Angela Yang.</p> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/2023-05/Fig14_3x2.jpg" width="1500" height="1000" alt="&quot;&quot;"> </div> </div> <figcaption><em>A study participant uses the interactive camera robot Stargazer to record a how-to video on skateboard maintenance&nbsp;(supplied photo)</em></figcaption> </figure> 
<p>Stargazer uses a single camera on a robot arm, with seven independent motors that can move along with the video subject by autonomously tracking regions of interest. The system’s camera behaviours can be adjusted based on subtle cues from instructors, such as body movements, gestures and speech that are detected by the prototype’s sensors.</p> <p>The instructor’s voice&nbsp;is recorded with a wireless microphone and sent to Microsoft Azure Speech-to-Text, a speech-recognition service.&nbsp;The transcribed text, along with a custom prompt, is then sent to GPT-3,&nbsp;a large language model that labels the instructor’s intention for the camera&nbsp;–&nbsp;such as a standard versus&nbsp;high angle and normal versus&nbsp;tighter framing.</p> <p>These camera control commands are cues naturally used by instructors to guide the attention of their audience and are not disruptive to instruction delivery, the researchers say.</p> <p>For example, the instructor can have Stargazer adjust its view to look at each of the tools they will be using during a tutorial by pointing to each one, prompting the camera to pan around. The instructor can also say to viewers, “If you look at how I put ‘A’ into ‘B’ from the top,” and Stargazer will respond by framing the&nbsp;action with a high angle to give the audience a better view.</p> <p>In designing the interaction vocabulary, the team wanted to identify signals that are subtle and avoid the need for the instructor to communicate separately to the robot while speaking to their students or audience.</p> <p>“The goal is to have the robot understand in real time what kind of shot the instructor wants,” Li says.&nbsp;“The important part of this goal is that we want these vocabularies to be non-disruptive. 
It should feel like they fit into the tutorial.”</p> <p>Stargazer’s abilities were put to the test in a study involving six instructors, each teaching a distinct skill to create dynamic tutorial videos.</p> <p>Using the robot,&nbsp;they were able to produce videos demonstrating physical tasks on a diverse range of subjects, from skateboard maintenance to interactive sculpture-making and&nbsp;setting up virtual-reality headsets, while relying on the robot for subject tracking, camera framing and camera angle combinations.</p> <p>The participants were each given a practice session and completed their tutorials within two takes. The researchers reported all of the participants were able to create videos without needing any controls beyond those provided by the robotic camera and were satisfied with the quality of the videos produced.</p> <p>While Stargazer’s range of camera positions is sufficient for tabletop activities, the team is interested in exploring the potential of camera drones and robots on wheels to help with filming tasks in larger environments from a wider variety of angles.</p> <p>They also found some study participants attempted to trigger object shots by giving or showing objects to the camera, which were not among the cues that Stargazer currently recognizes. 
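The intent-labelling step described earlier — turning an instructor's words into a framing and angle choice — can be made concrete with a toy stand-in. The real system sends the transcript and a custom prompt to GPT-3; the keyword rules below are invented purely for illustration and are not part of Stargazer:

```python
# Hypothetical stand-in for Stargazer's intent-labelling step. The actual
# system transcribes the instructor's speech (Microsoft Azure Speech-to-Text)
# and sends the transcript with a custom prompt to GPT-3; these keyword
# rules only sketch the input and output of that step.

def camera_command(utterance: str) -> dict:
    """Map an instructor's utterance to camera framing and angle settings."""
    text = utterance.lower()
    command = {"framing": "normal", "angle": "standard"}
    if "look at how" in text or "watch closely" in text:
        command["framing"] = "tight"   # zoom in on the demonstrated action
    if "from the top" in text or "from above" in text:
        command["angle"] = "high"      # high angle gives the audience a better view
    return command
```

On the article's example utterance — "If you look at how I put 'A' into 'B' from the top" — such a mapping would yield tight framing from a high angle, which is the behaviour the researchers describe.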
Future research could investigate methods to detect diverse and subtle intents by combining simultaneous signals from an instructor’s gaze, posture and speech, which Li says is a long-term goal the team is making progress on.</p> <p><iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen frameborder="0" height="422" src="https://www.youtube.com/embed/fQ9JeptOgZ0" title="YouTube video player" width="750"></iframe></p> <p>While the team presents Stargazer as an option for those who do not have access to professional film crews, the researchers admit the robotic camera prototype relies on an expensive robot arm and a suite of external sensors. Li notes, however, that the&nbsp;Stargazer concept is not necessarily limited by costly technology.</p> <p>“I think there’s a real market for robotic filming equipment, even at the consumer level. Stargazer is expanding that realm,&nbsp;but looking farther ahead with a bit more autonomy and a little bit more interaction. So&nbsp;realistically, it could&nbsp;be available to consumers,” he says.</p> <p>Li says the team is excited by the possibilities Stargazer presents for greater human-robot collaboration.</p> <p>“For robots to work together with humans, the key is for robots to understand humans better. Here, we are looking at these vocabularies, these typically human communication behaviours,” he&nbsp;explains.</p> <p>“We hope to inspire others to look at understanding how humans communicate ... 
and how robots can pick that up and have the proper reaction, like assistive behaviours.”</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> <div class="field field--name-field-add-new-author-reporter field--type-entity-reference field--label-above"> <div class="field__label">Add new author/reporter</div> <div class="field__items"> <div class="field__item"><a href="/news/authors-reporters/krystle-hewitt" hreflang="en">Krystle Hewitt</a></div> </div> </div> Fri, 19 May 2023 19:02:56 +0000 siddiq22 301759 at