Insights from a high schooler at NVIDIA’s GTC

Guest post by Jocelin Su (Stanford AI4ALL ‘16)

Jocelin (bottom row left, in blue) with other AI4ALL alums in front of NVIDIA’s self-driving car

AI4ALL Editor’s note: Meet Jocelin Su, a 2016 Stanford AI4ALL (formerly SAILORS) alumna. Below, Jocelin shares her learnings from NVIDIA’s GPU Technology Conference (GTC). Eight AI4ALL alumni attended the conference in March 2018, eager to participate in the vibrant speaker sessions and develop their passions in AI through hands-on activities.


Sunlight glinted off the San Jose McEnery Convention Center, the site of the 2018 NVIDIA GPU Technology Conference (GTC), as I approached. GTC, hosted by the leading GPU manufacturer and AI company NVIDIA, is an international gathering where tech industry leaders and researchers discuss impactful developments in AI and machine learning.

Inside the building, two sleek autonomous cars greeted visitors at the entrance, green and white NVIDIA banners adorned the interior, and industry professionals from all over the world milled about, contributing to a vibrant atmosphere. With hundreds of sessions on AI applications ranging from healthcare to VR, a keynote from NVIDIA CEO Jensen Huang, a company exhibition hall, and programming labs, the four-day conference was bustling with activities and events for tech enthusiasts.

Jocelin and other AI4ALL alums at the conference

As part of AI4ALL’s alumni group, I, along with seven other girls, had the opportunity to attend a day of GTC. The first talk we attended was “Combining VR, AR, Simulation, and the IoT to Create a Digital Twin,” presented by Lockheed Martin. Contrary to what I had imagined, a digital twin is not a digital look-alike of a human but a virtual model of a physical product or environment. These models can be as varied as the interior of a ship, an airplane, or a planet in outer space. In sensitive or dangerous settings like these, it can be safer and more economical for humans to interact with a digital twin to gather the information needed to make decisions. Additionally, virtual prototyping using virtual and augmented reality technologies can lead to better-designed products and more efficient manufacturing.

AI4ALL alumni at the emoji face recognition demo

After checking out the demo of a helicopter in AR at the Lockheed Martin talk, we made a stop at an emoji face recognition demo. There, camera software could discern our facial expressions and output happy, sad, angry, disgusted, neutral, or contemptuous emojis next to our faces. Next to it was a chicken model demo, where a machine scanned our 2D camera-captured images and configured a 3D chicken model that could mirror our body movements and even lay golden eggs!

AI4ALL alums participating in self-paced labs at NVIDIA GTC

We then worked through self-paced labs ranging from the basics of deep learning to advanced image processing. Through Jupyter notebook tutorials, I first learned style transfer, in which the artistic style of one piece of art is transferred onto an ordinary image, using the machine learning library Torch. Then I got an overview of deep learning and of object detection in video.
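For curious readers, here is a minimal sketch of what a Gatys-style neural style transfer loop looks like. The labs used Torch; this sketch assumes PyTorch instead, and the layer indices and loss weights are illustrative choices, not the lab’s actual code.

```python
# A minimal sketch of Gatys-style neural style transfer (assumes PyTorch,
# not the Lua Torch used in the labs). Expects (1, 3, H, W) image tensors.
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)  # the network is fixed; only the image is optimized

def gram_matrix(feat):
    # Style is captured by correlations between feature maps (the Gram matrix).
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

def features(x, layers={4, 9, 18, 27}):  # illustrative layer indices
    # Collect activations from a few intermediate VGG layers.
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out.append(x)
    return out

def style_transfer(content, style, steps=300, style_weight=1e5):
    # Start from the content image and optimize its pixels directly.
    img = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=0.02)
    c_feats = features(content)
    s_grams = [gram_matrix(f) for f in features(style)]
    for _ in range(steps):
        opt.zero_grad()
        feats = features(img)
        content_loss = F.mse_loss(feats[-1], c_feats[-1])
        style_loss = sum(F.mse_loss(gram_matrix(f), g)
                         for f, g in zip(feats, s_grams))
        (content_loss + style_weight * style_loss).backward()
        opt.step()
    return img.detach()
```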

At noon came the most exciting part of the conference: the lunch and exhibition hall! The event featured prominent companies like NVIDIA, Google, Facebook, and IBM, and more than two hundred startups grouped into demo-filled areas like VR Village, Robotics, and Autonomous Cars. We got to see a robot that suctioned up boxes for transportation, a robot that could pick up food orders from the shelf, self-driving cars from NVIDIA and Toyota, a huge self-driving transport vehicle, and even an autonomous tractor.

After checking out the exhibition hall, we attended “The Future of the In-Car Experience,” presented by Affectiva. As autonomous vehicles rapidly advance, beyond features like safe street navigation and accurate path planning, these vehicles must also be able to detect passengers’ emotional states in order to optimize their experience. AI capable of interpreting facial expressions, speech, and context is essential for this purpose. Affectiva’s Emotion AI solution used deep learning models built from convolutional neural networks and long short-term memory (LSTM) units to detect emotions from video-captured facial expressions, perceiving actions like smiling, yawning, or sneezing.
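A per-frame CNN feeding an LSTM over time is a common pattern for recognizing emotion in video. Below is a generic sketch of that architecture in PyTorch; the layer sizes and the six-class output are assumptions for illustration, not Affectiva’s actual model.

```python
# A generic CNN + LSTM sketch for classifying emotion from a short video clip:
# a small CNN embeds each frame, and an LSTM models the sequence over time.
# This illustrates the architecture named in the talk, not Affectiva's model.
import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    def __init__(self, num_emotions=6, hidden=128):  # sizes are illustrative
        super().__init__()
        self.cnn = nn.Sequential(                    # per-frame feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch * time, 32)
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_emotions)

    def forward(self, clips):                        # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)                 # final hidden state summarizes clip
        return self.head(h[-1])                      # logits over emotion classes

logits = EmotionNet()(torch.randn(2, 16, 3, 64, 64))  # 2 clips of 16 frames each
```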

Learning about the intersection of art and AI through style transfer

Our third and final talk, “Real-Time Genetic Analysis Enabled by GPU,” was presented by Oxford Nanopore Technologies, whose speakers discussed some of the company’s pocket-sized devices that can conduct DNA and RNA analysis in real time. These devices rely on nanopore-based genetic sequencing, with applications in clinical diagnosis, infection surveillance, and much more. Modern GPU technology helps address challenges in signal processing and data analysis.

Learning more about AI, gaining insight into groundbreaking technologies, and experiencing the industry environment firsthand were my greatest takeaways from GTC. I am grateful to AI4ALL and to the generosity of NVIDIA for this opportunity, as we were (likely) the only teenage girls present at the conference. The relative homogeneity of GTC’s participants wasn’t hard to notice, underscoring the very issue that initiatives like AI4ALL hope to tackle. Above all, I am inspired to know that the future holds an incredible array of opportunities within a single generation of AI.


About Jocelin

Jocelin Su is a junior at Evergreen Valley High School in California and an avid enthusiast of math and computer science. After attending Stanford AI4ALL in 2016, she was inspired to found an AI club at her school and to create the She.codes workshop program for middle school girls. She is currently conducting a bioinformatics project through the AI4ALL mentorship program.
