Think that you’re being objective about how you’re training your startup’s AI model? Think again.
Whether it’s the data you choose, the sources you pull from, or the features you include, each is a step where bias can creep in. So if your data conveniently supports your underlying hypothesis, your guard should go up immediately.
The truth is, there’s no easy solution for preventing bias. But that doesn’t mean AI and Machine Learning (ML) are doomed to fail. Bias in your startup’s AI models can be prevented if you start thinking about mitigation on day one.
During this fireside chat, Sift Science CEO Jason Tan will unpack these complexities and outline proactive steps you can take to keep bias from creeping in the first place, or to course-correct if and when it does. He’ll also answer questions about his path from engineer to CEO and about balancing product and culture development through stages of growth.
6:30PM-7:00PM | Check-in
7:00PM-8:00PM | In Conversation with Sift Science CEO, Jason Tan
8:00PM-9:00PM | Networking & Drinks
CEO, Sift Science
A software engineer at heart, Jason was an early engineer at Zillow, Optify, and BuzzLabs before his passion for machine learning technology prompted him to launch Sift Science.
Jason believes the age of AI is here, and he plans to keep working until he can create the perfect company: one that can be left alone and operated by a single machine. Off the clock, he’s playing chess or basketball and freestyle rapping.
525 Market Street, 2nd Floor (courtyard entrance)