The Institute for the History and Philosophy of Science and Technology / en U of T experts tackle questions about AI safety, ethics during panel discussion /news/u-t-experts-tackle-questions-about-ai-safety-ethics-during-panel-discussion <span class="field field--name-title field--type-string field--label-hidden">U of T experts tackle questions about AI safety, ethics during panel discussion</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2024-09/_DJC7384-crop.jpg?h=8ff31e88&amp;itok=CnUVindP 370w, /sites/default/files/styles/news_banner_740/public/2024-09/_DJC7384-crop.jpg?h=8ff31e88&amp;itok=wFB73LpO 740w, /sites/default/files/styles/news_banner_1110/public/2024-09/_DJC7384-crop.jpg?h=8ff31e88&amp;itok=YAREtckR 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2024-09/_DJC7384-crop.jpg?h=8ff31e88&amp;itok=CnUVindP" alt="&quot;&quot;"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>rahul.kalvapalle</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2024-10-02T14:33:43-04:00" title="Wednesday, October 2, 2024 - 14:33" class="datetime">Wed, 10/02/2024 - 14:33</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item"><p><em>From left: U of T's Roger Grosse, Sedef Kocak, Sheila McIlraith and Karina Vold take part in a panel discussion on AI safety (photo by Duane Cole)</em></p> </div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a 
href="/news/authors-reporters/kyle-coulter" hreflang="en">Kyle Coulter</a></div> </div> <div class="field field--name-field-secondary-author-reporter field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/jovana-jankovic" hreflang="en">Jovana Jankovic</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/institute-history-and-philosophy-science-and-technology" hreflang="en">The Institute for the History and Philosophy of Science and Technology</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/centre-ethics" hreflang="en">Centre for Ethics</a></div> <div class="field__item"><a href="/news/tags/department-computer-science" hreflang="en">Department of Computer Science</a></div> <div class="field__item"><a href="/news/tags/department-philosophy" hreflang="en">Department of Philosophy</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/vector-institute" hreflang="en">Vector Institute</a></div> <div class="field__item"><a href="/news/tags/victoria-college" hreflang="en">Victoria College</a></div> </div> <div class="field field--name-field-subheadline field--type-string-long field--label-above"> <div class="field__label">Subheadline</div> <div 
class="field__item">“We should be building AI systems that promote human flourishing – that allow human beings to live with dignity and purpose, and to be valued contributors to society”&nbsp;&nbsp;</div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>What does safe artificial intelligence look like? Could AI go rogue and pose an existential threat to humanity? How have media portrayals of AI influenced people’s perceptions of the technology’s benefits and risks?</p> <p>These were among the pressing questions tackled by four experts at the University of Toronto and its partner institutions – in disciplines ranging from computer science to philosophy – during a recent panel discussion on AI safety.</p> <p><strong>Sheila McIlraith</strong>, professor in U of T’s department of computer science at the Faculty of Arts &amp; Science and Canada CIFAR AI Chair at the Vector Institute, said the notion of AI safety evokes different things to different people.&nbsp;</p> <p>“Computer scientists often think about safety critical systems – the types of systems that we’ve built to send astronauts to the moon or control our nuclear power plants – but AI safety is actually quite different,” said McIlraith, an associate director at U of T’s <a href="https://srinstitute.utoronto.ca">Schwartz Reisman Institute for Technology and Society</a> (SRI).</p> <p>“For me personally, I have a higher bar, and I really think we should be building AI systems that promote human flourishing – that allow human beings to live with dignity and purpose, and to be valued contributors to society.”&nbsp;&nbsp;</p> <p>The event, hosted by SRI in partnership with the <a href="https://vectorinstitute.ai">Vector Institute</a>, the <a href="https://ihpst.utoronto.ca">Institute for the History &amp; Philosophy of Science &amp; Technology</a>, the <a href="https://ethics.utoronto.ca">Centre for Ethics</a> and <a 
href="https://www.vic.utoronto.ca">Victoria College</a>, invited McIlraith and her fellow panelists to discuss how AI technologies can be aligned with human values in an increasingly automated world.</p> <p>They also discussed how risks surrounding the technology can be mitigated in different sectors.</p> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/styles/scale_image_750_width_/public/2024-09/_DJC7290-crop.jpg?itok=HAe8oD2Q" width="750" height="501" alt="&quot;&quot;" class="image-style-scale-image-750-width-"> </div> </div> <figcaption><em>Karina Vold, the event’s moderator, underscored the challenge of building safe AI systems in an uncertain world (photo by Duane Cole)</em></figcaption> </figure> <p>Moderator&nbsp;<strong>Karina Vold</strong>, assistant professor in the Institute for the History &amp; Philosophy of Science &amp; Technology in the Faculty of Arts &amp; Science, noted that because AI systems operate “in a world filled with uncertainty and volatility, the challenge of building safe and reliable AI is not easy and mitigation strategies vary widely.”&nbsp;</p> <p>She proceeded to ask the panel to share their thoughts on the portrayal of AI in popular culture.&nbsp;</p> <p>“The media devotes more attention to different aspects of AI – the social, philosophical, maybe even psychological,” said&nbsp;<strong>Sedef Kocak</strong>, director of AI professional development at the Vector Institute.&nbsp;</p> <p>“These narratives are important to help show the potential fears, as well as the positive potential of the technology.”</p> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" 
src="/sites/default/files/styles/scale_image_750_width_/public/2024-09/_DJC7298-crop.jpg?itok=O2pDcVyg" width="750" height="500" alt="&quot;&quot;" class="image-style-scale-image-750-width-"> </div> </div> <figcaption><em>The discussion touched on several topics related to AI safety (photo by Duane Cole)</em></figcaption> </figure> <p><strong>Roger Grosse</strong>, associate professor in U of T’s department of computer science in the Faculty of Arts &amp; Science and a founding member of the Vector Institute, said that safety concerns around AI are not merely rooted in science and pop culture, but also in philosophy.&nbsp;</p> <p>“Many people think that the public’s concerns regarding AI risks come from sci-fi, but I think the early reasoning regarding AI risks actually has its roots in philosophy,” said Grosse, who also holds a Schwartz Reisman Chair in Technology and Society.&nbsp;&nbsp;</p> <p>“If we’re trying to reason about AI systems that don’t yet exist, we don’t have the empirical information, and don’t yet know what their design would be, what we can do is come up with various thought experiments. For example, what if we designed an AI that has some specific role, and all of the actions that it takes are in service of the role?</p> <p>“For the last decade, a lot of the reasons for being concerned about the long-term existential risks really came from this careful philosophical reasoning.”</p> <p>The discussion also touched on the dangers of AI models misaligning themselves, how to guard against bias in the training of large language models, and how to ensure that AI models with potentially catastrophic capabilities are safeguarded.</p> <p>“This [safeguarding] is an area where new research ideas and principles will be required to make the case,” said Grosse. “Developers saying, ‘Trust us’ is not sufficient. 
It’s not a good foundation for policy.”&nbsp;</p> <p>Though the discussion addressed AI’s potential harms and risks, the panelists also shared their optimism about how the technology can be wielded for the greater good – with Grosse noting that AI offers the promise of making knowledge more widely accessible, and Kocak focusing on the myriad benefits for industries.</p> <p><strong>Watch the Sept. 10 conversation below:</strong></p> <p><iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen frameborder="0" height="500" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/Z1EqkTrotHE?si=xCuaVunRk0e7YDDt" title="YouTube video player" width="750"></iframe></p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Wed, 02 Oct 2024 18:33:43 +0000 rahul.kalvapalle 309490 at Can AI deliver therapy? U of T PhD candidate examines the pros and cons /news/phd-candidate-rachel-katz-examines-pros-and-cons-ai-therapy <span class="field field--name-title field--type-string field--label-hidden">Can AI deliver therapy? 
U of T PhD candidate examines the pros and cons</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2023-07/GettyImages-1291606290-crop.jpg?h=afdc3185&amp;itok=VIe9gzjb 370w, /sites/default/files/styles/news_banner_740/public/2023-07/GettyImages-1291606290-crop.jpg?h=afdc3185&amp;itok=WrymCJVJ 740w, /sites/default/files/styles/news_banner_1110/public/2023-07/GettyImages-1291606290-crop.jpg?h=afdc3185&amp;itok=KI0wIhU3 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2023-07/GettyImages-1291606290-crop.jpg?h=afdc3185&amp;itok=VIe9gzjb" alt="&quot;&quot;"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2023-07-28T16:00:38-04:00" title="Friday, July 28, 2023 - 16:00" class="datetime">Fri, 07/28/2023 - 16:00</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item"><p><em>(photo by&nbsp;Igor Kutyaev/Getty Images)</em></p> </div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/sean-mcneely" hreflang="en">Sean McNeely</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference 
field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/institute-history-and-philosophy-science-and-technology" hreflang="en">The Institute for the History and Philosophy of Science and Technology</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/graduate-students" hreflang="en">Graduate Students</a></div> <div class="field__item"><a href="/news/tags/mental-health" hreflang="en">Mental Health</a></div> <div class="field__item"><a href="/news/tags/research-innovation" hreflang="en">Research &amp; Innovation</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>Earlier this year, the U.S.-based National Eating Disorder Association shut down its AI-powered chatbot after it provided unsafe advice to people seeking help.</p> <p>The chatbot, called “Tessa,” instructed people with eating disorders to greatly reduce their daily calorie intake, according to complaints.</p> <p>“The chatbot was suggesting callers try things that gave them eating disorders in the first place,” says&nbsp;<a href="https://www.rakatz.com/"><strong>Rachel Katz</strong></a>, a PhD candidate with the Faculty of Arts &amp; Science’s&nbsp;<a href="https://www.ihpst.utoronto.ca/">Institute for the History &amp; Philosophy of Science &amp; Technology</a>.</p> <p>Her research focuses on bioethics, as well as the philosophy of medicine and psychiatry&nbsp;– including an interest in AI ethics.</p> <p>Katz is in the early stages of her doctoral research that examines the pros and cons of AI-facilitated psychotherapy, specifically AI-delivered therapy that doesn’t involve a clinician at any point.</p> <p>Her work received coverage from major news outlets 
this summer, following her research presentation at the Congress 2023 gathering of the <a href="https://cshps.ca/">Canadian Society for the History and Philosophy of Science</a> at York University in May.</p> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/styles/scale_image_750_width_/public/2023-07/rachel-katz-crop.jpg?itok=pCu5QZq8" width="750" height="500" alt="&quot;&quot;" class="image-style-scale-image-750-width-"> </div> </div> <figcaption><em>Rachel Katz believes AI-delivered psychotherapy can be a useful tool in certain situations (photo courtesy of Rachel Katz)</em></figcaption> </figure> <p>“There are several ways that AI has come to be part of psychotherapy,” Katz says. “I'm focusing on the patient interaction aspect. I’m looking at apps ... that don't require interaction with a human therapist.</p> <p>“I'm not against the use of AI therapy chatbots – they are a useful tool, but they need to be understood and regulated properly before they spin out of control. Currently, we don't have sufficient rules and guidelines for the kinds of things these chatbots could be useful for.”</p> <p>Katz believes the difference between an AI chatbot and a human therapist is trust, as opposed to reliance.</p> <p>“When we have a good relationship with a human therapist, we've formed a trust-based relationship built on goodwill and vulnerability,” she says.</p> <p>Conversely, Katz describes the reliant relationship with AI therapy as “talking to an affectionate wall.”</p> <p>“You can have this supportive, helpful experience working with an AI therapist,” Katz says. 
“There's no vulnerability, but you can rely on it to be there.”</p> <p>But she notes there is something fundamentally different about human-to-human relationships&nbsp;– and part of that comes from the fact that a therapist can make mistakes.</p> <p>“You may have a great working relationship with a therapist, and then they suggest something that doesn't work for you,” Katz says. “Or they may say, ‘Here's how I've interpreted what you've told me,’ and you correct them.”</p> <p>That could become problematic if it leads to misdiagnosing or mistreating a mental health concern.</p> <p>“But part of that ability to mess up represents that ‘special human element’ that I’m exploring – and it’s one of the things that makes [us] trust someone in general,” she says. “That's something you completely lose out on with an AI therapist.”</p> <p>That element of vulnerability can flow both ways&nbsp;– a human therapist can make themselves vulnerable by offering insights into their personal lives, which can also strengthen a bond with a patient.</p> <p>Katz plans to further investigate what makes that human relationship special&nbsp;– something she feels she hasn’t fully uncovered yet.</p> <p>“That’s the big philosophical question underpinning the whole project,” she says.</p> <p>Katz also questions whether AI chatbots are effective when dealing with crises. For example, chatbots are not considered very effective when assisting someone who is suicidal.</p> <p>“They will just direct you to call 911 or some other kind of emergency service,” Katz says. 
“That's not always a good solution for people who may be very distressed.”</p> <p>Despite the challenges, Katz believes AI has some advantages.</p> <p>For example, crisis lines are often staffed by volunteers, and such work can take a heavy emotional toll.</p> <p>“You could argue that turning those call systems into AI saves the emotional burden from volunteers,” Katz says.</p> <p>Distance and accessibility could also be a factor – if a person living remotely has to drive hours for an appointment, AI might be a more convenient option.</p> <p>“Or if you're someone who works strange hours&nbsp;– say you work a night shift and need to see a therapist&nbsp;– that's also a difficult situation that might be better suited to AI,” Katz says.</p> <p>But AI’s biggest appeal may be the fact that it’s impersonal.</p> <p>Some people – particularly those who have never taken part in therapy before&nbsp;– may feel more at ease chatting to AI rather than a person when it comes to sharing their problems and issues.</p> <p>“I was teaching a class and asked my students, ‘How would you feel about having a therapist?’ A shocking number of them said they were keen on first talking to an AI therapist,” Katz says.</p> <p>“They were nervous about the idea of expressing difficult emotions to another human. They preferred having something that could listen but was incapable of human judgment. Ideally, a therapist is non-judgmental, but someone who's seeking out a resource for the first time may not be aware of that.”</p> <p>As Katz continues her research, she does see AI-based psychotherapy working well in certain situations&nbsp;–&nbsp;with the proper guidelines and disclaimers.</p> <p>“Ultimately, I want to advocate for patient choice,” she says. “I wouldn't want to deny a patient their ability to make the choice for what intervention they feel will make the most sense for them. 
If the type of therapy or the method of delivery of therapy doesn't work for the patient, the treatment is not going to be effective.”</p> <p>As she delves deeper into the subject, Katz sees real-world applications for her work.</p> <p>“The goal is to do some philosophical investigating and hopefully come up with some answers that are philosophically interesting and can also help inform policy development in this area.”</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Fri, 28 Jul 2023 20:00:38 +0000 Christopher.Sorensen 302451 at Brave new tech: Experts say AI tools like ChatGPT – and the ethical questions they raise – are here to stay /news/brave-new-tech-experts-say-ai-tools-chatgpt-and-ethical-questions-they-raise-are-here-stay <span class="field field--name-title field--type-string field--label-hidden">Brave new tech: Experts say AI tools like ChatGPT – and the ethical questions they raise – are here to stay</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2023-04/AI-Group.jpeg?h=afdc3185&amp;itok=lfyrDkWw 370w, /sites/default/files/styles/news_banner_740/public/2023-04/AI-Group.jpeg?h=afdc3185&amp;itok=pd3qPcVJ 740w, /sites/default/files/styles/news_banner_1110/public/2023-04/AI-Group.jpeg?h=afdc3185&amp;itok=Nna4PMss 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2023-04/AI-Group.jpeg?h=afdc3185&amp;itok=lfyrDkWw" alt="(Clockwise from top left) Catherine Moore, Ashton Anderson, Karina Vold, Paul Bloom, Valérie Kindarji and Paolo Granata (supplied images, photo of Bloom by Greg Martin)"> </div> <span class="field 
field--name-uid field--type-entity-reference field--label-hidden"><span>siddiq22</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2023-04-05T14:44:10-04:00" title="Wednesday, April 5, 2023 - 14:44" class="datetime">Wed, 04/05/2023 - 14:44</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item"><p>(Clockwise from top left) Catherine Moore, Ashton Anderson, Karina Vold, Paul Bloom, Valérie Kindarji and Paolo Granata (supplied images, photo of Bloom by Greg Martin)</p> </div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/chris-sasaki" hreflang="en">Chris Sasaki</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/institute-history-and-philosophy-science-and-technology" hreflang="en">The Institute for the History and Philosophy of Science and Technology</a></div> <div class="field__item"><a href="/news/tags/school-cities" hreflang="en">School of Cities</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/centre-ethics" hreflang="en">Centre for Ethics</a></div> <div class="field__item"><a 
href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/faculty-music" hreflang="en">Faculty of Music</a></div> <div class="field__item"><a href="/news/tags/political-science" hreflang="en">Political Science</a></div> <div class="field__item"><a href="/news/tags/psychology" hreflang="en">Psychology</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><div class="image-with-caption right"> <p><img alt src="/sites/default/files/2023-04/DALL%C2%B7E%202023-03-08%2015.22.02%20-%20an%20image%20of%20the%20campus%20of%20the%20university%20of%20toronto%20in%20the%20style%20of%20van%20gogh%27s%20starry%20night.jpg" style="margin-left: 0px; margin-right: 0px; width: 300px; height: 300px;"><em>This image&nbsp;was created by directing Dall-e to produce&nbsp;an image&nbsp;of the University of Toronto in the style of painter Vincent van Gogh’s The Starry Night (Image by DALL-E/ directed by Chris Sasaki)</em></p> </div> <p>As artificial intelligence (AI) continues to rapidly advance, there has been a surge in the development of AI-powered content creation tools&nbsp;<a href="https://openai.com/blog/chatgpt">like ChatGPT</a>&nbsp;<a href="https://openai.com/product/dall-e-2">and Dall-e</a>&nbsp;that offer users a range of personalized experiences. However, with this growth come concerns about the potential dangers and ramifications of such apps, from privacy concerns to the displacement of human workers.</p> <p>For example, the previous paragraph was written by ChatGPT, illustrating the blurring of lines between AI- and human-generated content. And the&nbsp;image at right&nbsp;was created by directing Dall-e to produce an image of “the University of Toronto in the style of van Gogh’s&nbsp;<em>The Starry Night</em>.”</p> <p>In recent months, news headlines have outlined the issues relating to generative AI tools and content. 
Illustrators, graphic designers, photographers, musicians and writers have expressed concerns about losing income to generative AI and having their creations used as source material without permission or compensation.</p> <p>On the academic front, instructors are having to cope with students submitting work written by ChatGPT and are re-evaluating how best to teach and assess courses as a result. Institutions such as&nbsp;U of T are examining the ramifications of this technology and providing&nbsp;<a href="https://www.viceprovostundergrad.utoronto.ca/strategic-priorities/digital-learning/special-initiative-artificial-intelligence/" rel="noopener noreferrer" target="_blank">guidelines for students and instructors</a>.</p> <p>Despite the challenges, many experts say&nbsp;that the technology is here to stay, and that our focus should be on establishing guidelines and safeguards for its use, while&nbsp;others look to its positive potential.</p> <p>Faculty of Arts &amp; Science writer&nbsp;<strong>Chris Sasaki</strong>&nbsp;spoke with six U of T experts about the impact of generative AI tools –&nbsp;and the ethical questions posed by the new technology.</p> <hr> <h3><strong><a href="https://www.cs.toronto.edu/~ashton/" rel="noopener noreferrer" target="_blank">Ashton Anderson</a></strong></h3> <p><em><strong>Assistant professor, department of computer science</strong></em></p> <p><img alt src="/sites/default/files/2023-04/ashton-anderson-small.jpeg" style="margin: 5px 15px; float: left; width: 150px; height: 150px;">We are increasingly seeing AI game-playing, text generation and artistic expression tools that are designed to simulate a specific person. For example, it is easy to imagine AI models that play in the style of chess champion Magnus Carlsen, write like a famous author, or interact with students like their favourite teacher’s assistant. 
My colleagues and I refer to these as mimetic models –&nbsp;they mimic specific individuals –&nbsp;and they raise important social and ethical issues across a variety of applications.</p> <p>Will they be used to deceive others into thinking they are dealing with a real person&nbsp;–&nbsp;a business colleague, celebrity or political figure? What happens to an individual’s value or worth when a mimetic model performs well enough to replace that person? Conversely, what happens when the model exhibits bad behaviour&nbsp;–&nbsp;how does that affect the reputation of the person being modelled? And in all these scenarios, has consent been given by the person being modelled? It is vital to consider all of these questions as these tools increasingly become part of our everyday lives.</p> <h3><strong><a href="https://www.psych.utoronto.ca/people/directories/all-faculty/paul-bloom" rel="noopener noreferrer" target="_blank">Paul Bloom</a></strong></h3> <p><em><strong>Professor, department of psychology</strong></em></p> <p><img alt src="/sites/default/files/2023-04/Paul-Bloom-Credit-Greg-Martin-crop.jpeg" style="margin: 5px 15px; float: left; width: 150px; height: 150px;"></p> <p>What ChatGPT and other generative AI tools are doing right now is very impressive and also very scary. There are many questions about their capabilities that we don’t know the answers to. We don’t know their limits&nbsp;–&nbsp;whether there will be some things that a text generator is fundamentally incapable of doing. They can write short pieces, or write in the style of a certain person, but could they write a longer book?</p> <p>Some people don’t think they’ll be capable of a task like that, because these tools use deep-learning statistics&nbsp;–&nbsp;they produce sentences, then predict what comes next. But they lack the fundamentals of human thought. And until they possess those fundamentals, they’ll never come close to writing like we do. 
We have many things they don’t: we have a model of the world in our minds, mental representations of our homes, our friends. And we have memories. Machines don’t have those and until they do, they won’t be human –&nbsp;and they won’t be able to write, illustrate and create the way we do.</p> <h3><strong><a href="https://stmikes.utoronto.ca/about-us/contact-us/directory/paolo-granata" rel="noopener noreferrer" target="_blank">Paolo Granata</a></strong></h3> <p><strong>Associate professor, Media Ethics Lab; book and&nbsp;media studies, St. Michael’s College</strong></p> <p><img alt src="/sites/default/files/2023-04/Paolo-Granata-paolo_granata-small.jpeg" style="margin: 5px 15px; float: left; width: 150px; height: 150px;">AI literacy is key. Whether something is viewed as a threat or an opportunity, the wisest course of action is to comprehend it. For instance, since there are tasks that AI does more effectively than humans, let’s concentrate on tasks that humans do better than AI. The emergence of widely accessible generative AI technologies should also motivate educators to reconsider pedagogy, assignments and the whole learning process.</p> <p>AI is an eye-opener. The function of educators in the age of AI has to be re-evaluated – educators should be experience-designers rather than content providers. In education, the context is more important than the content. 
Now that we have access to such powerful content producers, we can focus primarily on a proactive learning approach.</p> <h3><strong><a href="https://politics.utoronto.ca/phd-candidate/kindarji-valerie/" rel="noopener noreferrer" target="_blank">Valérie Kindarji</a></strong></h3> <p><strong>PhD candidate, department of political science</strong></p> <p><img alt src="/sites/default/files/2023-04/Valerie-Kindarji-small.jpeg" style="margin: 5px 15px; float: left; width: 150px; height: 150px;">While public focus has been on the disruptive AI technologies themselves, we cannot forget about the people behind the screen using these tools. Our democracy requires informed citizens with access to high-quality information, and digital literacy is crucial for us to understand these technologies so we can best leverage them. It is empowering to have access to tools which can help spark our creativity and summarize information in a split second.</p> <p>But while it is important to know what these tools can do to help us move forward, it is just as important to learn and recognize their limitations. In the age of information overload, digital literacy can provide us with pathways to exercise our critical thinking online, to understand the biases impacting the output of AI tools&nbsp;and to be discerning consumers of information. The meaning of literacy continues to evolve with technology, and we ought to encourage initiatives which help us learn how to navigate the online information ecosystem. 
Ultimately, we will be better citizens and neighbours for it.</p> <h3><strong><a href="https://www.schoolofcities.utoronto.ca/people/directories/all-faculty/catherine-moore" rel="noopener noreferrer" target="_blank">Catherine Moore</a></strong></h3> <p><strong>Adjunct professor, School of Cities; Faculty of Music</strong></p> <p><img alt src="/sites/default/files/2023-04/catherine-moore-crop.jpeg" style="margin: 5px 15px; float: left; width: 150px; height: 150px;">Would seeing a credit at the end of a film, ‘Original score generated by Google Music,’ alter my appreciation of the score? I don’t think so. Music in a film is meant to produce an emotional impact. That’s its purpose. And if a score created by AI were successful in doing that, then it’s done its job –&nbsp;regardless of how it was created.</p> <p>What’s more, generative AI “composers” raise the questions: What is sound;&nbsp;what is music? What is natural sound;&nbsp;what is artificial sound? These questions go back decades, with people capturing mechanical sounds or sounds from nature. You speed them up, slow them down. You do all sorts of things to them.
The whole electro-acoustic music movement was created by musicians using technology to manipulate acoustic sounds to create something new.</p> <p>I see the advent of AI-generated music as part of a natural progression in the long line of music creators using new technologies with which to create and produce –&nbsp;in order to excite, intrigue, surprise, delight and mystify listeners the way they always have.</p> <h3><strong><a href="https://philosophy.utoronto.ca/directory/karina-vold/" rel="noopener noreferrer" target="_blank">Karina Vold</a></strong></h3> <p><strong>Assistant professor, Institute for the History &amp; Philosophy of Science &amp; Technology;&nbsp;Centre for Ethics;&nbsp;Schwartz Reisman Institute for Technology &amp; Society</strong></p> <p><img alt src="/sites/default/files/2023-04/karina-vold-portrait-crop.jpeg" style="margin: 5px 15px; float: left; width: 150px; height: 150px;">The progress of these tools is exciting, but there are many risks. For example, there’s bias in these systems that reflects human bias. If you asked a tool like ChatGPT to name ten famous philosophers, it would respond with ten Western male philosophers. And when you then asked for female philosophers, it would still only name Western philosophers. So,&nbsp;<a href="https://openai.com/product/gpt-4">GPT-4 is OpenAI’s attempt</a>&nbsp;to respond to these concerns, but unfortunately, they haven’t all been addressed.</p> <p>In his book&nbsp;<a href="https://press.princeton.edu/books/hardcover/9780691122946/on-bullshit" rel="noopener noreferrer" target="_blank"><em>On Bullshit</em></a>, [moral philosopher]&nbsp;<a href="https://www.americanacademy.de/person/harry-frankfurt/" rel="noopener noreferrer" target="_blank">Harry Frankfurt</a>&nbsp;argues that “bullshitters” are more dangerous than liars, because liars at least keep track of their lies and remember what’s true and what’s a lie. But bullshitters just don’t care.
Well, ChatGPT is a bullshitter –&nbsp;it doesn’t care about the truth of its statements. It makes up content and it makes up references. And the problem is that it gets some things right some of the time, so users start to trust it –&nbsp;and that’s a major concern.</p> <p>Lawmakers need to catch up in terms of regulating these generative AI companies. There’s been internal review by some companies, but that’s not enough. My view is there should be ethics review boards and even laws regulating this new technology.</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Wed, 05 Apr 2023 18:44:10 +0000 siddiq22 301054 at U of T researchers investigate the hidden history of 'Black androids' /news/u-t-researchers-explore-hidden-history-black-androids <span class="field field--name-title field--type-string field--label-hidden">U of T researchers investigate the hidden history of 'Black androids'</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2023-04/Blackness-and-technology.jpeg?h=afdc3185&amp;itok=g_3lgRdF 370w, /sites/default/files/styles/news_banner_740/public/2023-04/Blackness-and-technology.jpeg?h=afdc3185&amp;itok=PwAImpzO 740w, /sites/default/files/styles/news_banner_1110/public/2023-04/Blackness-and-technology.jpeg?h=afdc3185&amp;itok=2jlMpKPq 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2023-04/Blackness-and-technology.jpeg?h=afdc3185&amp;itok=g_3lgRdF" alt="U of T researchers investigate the hidden history of 'Black androids'"> </div> <span class="field field--name-uid field--type-entity-reference 
field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2022-01-04T11:03:37-05:00" title="Tuesday, January 4, 2022 - 11:03" class="datetime">Tue, 01/04/2022 - 11:03</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item"><p>Clockwise from top left: Edward Jones-Imhotep, Sarai Rudder, Emily Grenon and Alexander Offord are exploring the histories of racialized mechanical humans created between the mid-18th and late 20th centuries.&nbsp;</p> </div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/sarah-macfarlane" hreflang="en">Sarah MacFarlane</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/institute-history-and-philosophy-science-and-technology" hreflang="en">The Institute for the History and Philosophy of Science and Technology</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/history" hreflang="en">History</a></div> <div class="field__item"><a href="/news/tags/humanities" hreflang="en">Humanities</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>It began as a series of isolated references –&nbsp;the term would appear in a 
historical article here, a book there. <strong>Edward Jones-Imhotep</strong> soon began noticing references to “Black androids” everywhere.</p> <p>They were historically called “automata,” automatic machines&nbsp;that, in this case, replicated the actions of human beings.</p> <p>One example is an 1868 patented design of&nbsp;a mechanical person –&nbsp;a&nbsp;racist depiction of a Black man –&nbsp;wearing a top hat, smoking a pipe and pulling a cart.&nbsp;</p> <p>“These were references to real physical objects –&nbsp;not science fiction but actual machines that had been built in the form of Black humans,” says Jones-Imhotep, associate professor and director of the&nbsp;Institute for the History &amp; Philosophy of Science &amp; Technology&nbsp;at the Faculty of Arts &amp; Science.</p> <p>“The androids were part of the racist ideologies of their time. They portrayed Black people in pastoral, leisurely and non-technological roles. But it didn’t seem that anybody had looked into what role they might have played in creating racial mythologies –&nbsp;particularly one of the most prominent and harmful mythologies: the idea that technology is opposed to Blackness.”</p> <p>To help him explore this history, Jones-Imhotep turned to the&nbsp;Jackman Scholars-in-Residence program, a four-week research fellowship for upper-year undergrads in the humanities. In 2017, a year after its launch, the program received 1,000 applications for 50 openings, highlighting the need for additional research opportunities –&nbsp;a call answered by a $1-million gift from Bader Philanthropies, Inc. in 2018.</p> <p>Jones-Imhotep assembled a team of eight research assistants to undertake the Black Androids project. Their initial goal was to identify and document racialized automata created between the mid-18th and late 20th centuries.</p> <p>In the first two weeks alone, they discovered more than 100 androids. 
There are now around 150 androids in the team’s database, which they plan to continue building and refining.</p> <p>“The most surprising part has been the sheer number of discoveries we have made. It is almost as though we have uncovered a hidden universe that exposes the deep racial roots of societies worldwide,” says <strong>Sarai Rudder</strong>, a research assistant and third-year undergraduate at Trinity College who is majoring in peace, conflict and justice, with minors in sociology and critical studies in equity and solidarity.</p> <p>As their research evolved, the team realized&nbsp;that the androids were part of another hidden history.</p> <p>“We discovered that, if we looked at the machinery beneath the androids’ surface, the same technologies that physically drove their racist depictions were, in other contexts, part of the rich technological experiences of Black peoples at the time –&nbsp;that’s what we want to explore,” Jones-Imhotep explains.</p> <p>Their current investigation, which forms the basis of a Social Sciences and Humanities Research Council Insight Grant proposal, focuses on New York City between 1830 and 1930 and has two main goals.</p> <p>“The idea is to first create a digital historical map of the androids’ movements, their histories and the interconnections they have with the social and technological development of New York City over the course of that century,” says Jones-Imhotep. “We then use the androids’ technology as a portal to try to understand the lived technological experiences of Black New Yorkers, and how those experiences shatter the racist depictions of the androids themselves.”</p> <p>New York is the focus of the project because many of the androids were manufactured and displayed there during this period. 
But the city’s significance runs deeper.</p> <p>“There is a kind of cultural spectacle in New York during the late 19th century that surrounds the androids, linking them to the racist vaudeville and minstrel theatre happening at the time,” Jones-Imhotep says. “The androids play a very important part in the construction of race within New York City.”</p> <p>The team’s interest in New York also stems from a gap in existing scholarship about its history. Researchers have studied the technological development of the city, on one hand, and the history of Black people in New York,&nbsp;“but nobody has written about these two threads as if they’re part of the same history,” he says.</p> <p>Working together to explore this largely unresearched area has been a meaningful part of the experience for the team.</p> <p>“The collaborative, creative nature of the work makes every new discovery resonant with possibilities. It’s like discovering a parallel dimension,” says <strong>Alexander Offord</strong>, a research assistant who earned his honours bachelor of arts in the history and philosophy of science earlier this year as a member of&nbsp;Woodsworth College. “I remain constantly enthralled by how wonderful and strange our discoveries are –&nbsp;from criminal geniuses and haunted dolls to secret societies and underground resistances.”</p> <p>“Professor Jones-Imhotep encouraged us to follow whatever paths our research took us on. Each of us has added pieces to the whole that we never could have anticipated,” says <strong>Emily Grenon</strong>, a research assistant and fourth-year undergraduate studying history and material culture as a member of&nbsp;Victoria College. 
“It’s been a wonderful experience to be part of a project that covers something totally new, and to work with such brilliant people while doing it.”</p> <p>The team plans to publish their findings widely, including in scholarly articles, conference papers, a book and even a graphic novel to share their research with a broader audience.</p> <p>“The point of the graphic novel is to highlight this hidden history of Black technological life,” says Jones-Imhotep. “The consequences of this are really important because the myth that sees technology as opposed to Blackness is partly created in the moment we’re investigating. It’s one of the most pernicious, damaging myths we have — and it persists to this day.</p> <p>“It denies educational opportunities for Black peoples. It justifies exclusion. It supports technologies that target and surveil African-descended peoples. That’s what makes this investigation so critical and urgent. We are working to rewrite the histories of Blackness and technology. By changing those histories, we hope to change the future.”</p> <p>Adds Rudder: “When we think of technology, it is often names like Thomas Edison, Alexander Graham Bell and Steve Jobs that come to mind. It almost feels as though Black people and other people of colour are missing from historical narratives surrounding technological advancement.”</p> <p>Jones-Imhotep says the Black Androids project highlights the histories of underrepresented groups and supports&nbsp;the research of Black scholars at U of T.</p> <p>“<a href="https://brn.utoronto.ca/">The&nbsp;Black Research Network</a>&nbsp;that was <a href="/news/u-t-s-black-research-network-support-black-scholarship-and-excellence">recently launched</a> here at the University of Toronto aims to support and nurture a network of Black researchers and the amazing research they’re doing,” he says. 
“That’s crucial given the historical experiences and lack of support Black scholars have had at universities worldwide.</p> <p>“One of the things this project hopes to demonstrate is the remarkable research that’s possible with focused resources and with the right support.”</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Tue, 04 Jan 2022 16:03:37 +0000 Christopher.Sorensen 301093 at Scholars-in-Residence program offers U of T undergraduate students immersive research experience /news/scholars-residence-program-offers-u-t-undergraduate-students-immersive-research-experience <span class="field field--name-title field--type-string field--label-hidden">Scholars-in-Residence program offers U of T undergraduate students immersive research experience</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/2019-08-13-scholars-in-residence2-resized.jpg?h=afdc3185&amp;itok=-an6sKyW 370w, /sites/default/files/styles/news_banner_740/public/2019-08-13-scholars-in-residence2-resized.jpg?h=afdc3185&amp;itok=f-11an9- 740w, /sites/default/files/styles/news_banner_1110/public/2019-08-13-scholars-in-residence2-resized.jpg?h=afdc3185&amp;itok=sNSHMWjY 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/2019-08-13-scholars-in-residence2-resized.jpg?h=afdc3185&amp;itok=-an6sKyW" alt="Photo of Scholars-In-Residence"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>noreen.rasbach</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2019-08-13T13:24:54-04:00" 
title="Tuesday, August 13, 2019 - 13:24" class="datetime">Tue, 08/13/2019 - 13:24</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item">U of T students worked with Hakob Barseghyan, assistant professor in the Institute for the History and Philosophy of Science and Technology, to analyze belief systems throughout history using a diagramming approach (photo by Adriana Leviston)</div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/jovana-jankovic" hreflang="en">Jovana Jankovic</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/institute-history-and-philosophy-science-and-technology" hreflang="en">The Institute for the History and Philosophy of Science and Technology</a></div> <div class="field__item"><a href="/news/tags/english" hreflang="en">English</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/history" hreflang="en">History</a></div> <div class="field__item"><a href="/news/tags/humanities" hreflang="en">Humanities</a></div> <div class="field__item"><a href="/news/tags/jackman-humanities-institute" hreflang="en">Jackman Humanities Institute</a></div> <div class="field__item"><a href="/news/tags/trinity-college" hreflang="en">Trinity College</a></div> <div class="field__item"><a href="/news/tags/undergraduate-students" 
hreflang="en">Undergraduate Students</a></div> <div class="field__item"><a href="/news/tags/victoria-college" hreflang="en">Victoria College</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><div>It’s&nbsp;a unique, immersive research experience that allows undergraduate students to&nbsp;spend four weeks&nbsp;working on projects with faculty members from across the humanities.</div> <div>&nbsp;</div> <div>This summer’s Scholars-in-Residence (SiR) program, run by the Jackman Humanities Institute and Victoria College, marked&nbsp;the&nbsp;fourth year that successful applicants took part in the residency in humanities and humanistic social science research.</div> <div>&nbsp;</div> <div>Students from across the Faculty of Arts &amp; Science, as well as U of T Mississauga and U of T Scarborough, participated in this year’s program. Participants are granted free accommodation in on-campus housing and a meal plan for the duration of the program, as well as a $1,000 Jackman Scholar Award.</div> <div>&nbsp;</div> <div>Participants say the opportunities to contribute to original faculty research projects and build new relationships with peers and professors are the biggest rewards.</div> <div>&nbsp;</div> <div>“SiR is not only an intensive research program, it also allowed me to network and meet an incredible diversity of students outside of my faculty and college,” says Trinity College student <strong>Kat Yampolsky</strong>, a second-year specialist in peace, conflict and justice studies with a double minor in Italian and Arabic. “I would 100 per cent recommend the program to other students.”</div> <div>&nbsp;</div> <div>Since its inception, the program has rapidly expanded. What started as a 20-student pilot in 2016 on the St. 
George campus has now grown into 100 participating students, with some research teams hosted at U of T Mississauga and U of T Scarborough in the last two years.</div> <div>&nbsp;</div> <div>Faculty members from across the humanities work with student researchers on a variety of projects. Some students helped&nbsp;transcribe and annotate 18<sup>th</sup>-century letters written in French between journalists and publishers, while others worked on&nbsp;an audience research study to examine the representation of minorities in the CBC sitcom <em>Kim’s Convenience</em>.</div> <div>&nbsp;</div> <div>Students paired with <strong>Hakob Barseghyan</strong>, assistant professor in the Institute for the History and Philosophy of Science and Technology, analyzed particular belief systems throughout history using a diagramming approach to turn historical data into easily comprehensible visualizations.</div> <div>&nbsp;</div> <div>Fourth-year Trinity College student <strong>Jessica Rapson</strong>, who studies psychology and philosophy, diagrammed the Aryan physics worldview, a product of Nazi ideology that existed briefly in the 1930s.</div> <div>&nbsp;</div> <div>“Few people know about the strange anachronistic scientific beliefs that flourished in the Nazi regime,” says Rapson. “I hope my project will help historians analyze why well-respected scientists – including Nobel Prize-winners – accepted the extremely questionable scientific theories proposed by Aryan science. 
These include parapsychology – a belief in things like ESP or telepathy – and the notion that the most fundamental particle in the universe is ice.”</div> <div>&nbsp;</div> <div><strong>Kye Palider</strong>, a Victoria College student who is entering his fourth&nbsp;year this fall as a physics and philosophy joint specialist with a history and philosophy of science minor, focused his diagramming on 20<sup>th</sup>-century scientific methodology in the physical and mathematical sciences.</div> <div>&nbsp;</div> <div>“The kind of diagramming we’re doing can prove useful in pedagogy, clarifying arguments and finding weak points in essays,” he says of Barseghyan’s project.</div> <div>&nbsp;</div> <div>Program director <strong>Angela Esterhammer</strong>, also a professor in the department of English, says SiR has been a “transformative experience” for students and faculty alike.</div> <div>&nbsp;</div> <div>“Students say the experience gives them new insight into critical thinking, community-engaged research and what it means to be a scholar working with primary sources,” she says.</div> <div>&nbsp;</div> <div>“The program also opens up career choices by demonstrating how research skills in the humanities can be applied in broader social, economic and cultural contexts. Meanwhile, faculty find that the results produced by their students exceed their expectations. An intense month of working with these talented undergraduates often reveals new approaches to their research.”</div> <div>&nbsp;</div> <div>Another SiR project is <strong>Timothy Sayle</strong>’s “Unlocking the Nuclear Vault,” an ongoing, multi-year examination of formerly top secret documents related to Canada and nuclear weapons in the Cold War. Sayle, an assistant professor in the department of history, has led SiR groups in past years and continues to believe in the program’s merits.</div> <div>&nbsp;</div> <div>“This is really all about our tremendous undergraduates,” says Sayle. 
“One of the most exciting things as a supervisor is to see how our students work together as a team. It’s incredible.”</div> <div>&nbsp;</div> <div>One of Sayle’s students, Victoria College’s <strong>David de Paiva</strong>, is majoring in political science and urban studies, with a minor in Russian literature in translation – a fitting combination for the Cold War project. He was particularly enthusiastic about the groundbreaking nature of the work.</div> <div>&nbsp;</div> <div>“As we are some of the first scholars outside government to see these files,” says de Paiva, “we’re able to actively participate in the revision and rewriting of history from an accurate, informed perspective. As an undergrad, this is a very unusual and rewarding experience.”</div> <div>&nbsp;</div> <div>Beyond their academic accomplishments, SiR students all commented on the lifelong friendships and collaborative relationships they formed during the intensive month-long residency.</div> <div>&nbsp;</div> <div>“I’m a humanities student,” says Yampolsky, “and one of my closest friends became a math, physics and philosophy student. Where else would I have had the opportunity to work with someone from a polar opposite academic background?”</div> <div>&nbsp;</div> <div>“SiR is an opportunity to befriend like-minded student researchers and professors and do so in an atmosphere free of the stress of grading or assignment deadlines,” says de Paiva. “It’s how learning should be all the time.”</div> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Tue, 13 Aug 2019 17:24:54 +0000 noreen.rasbach 157532 at