I’m writing to you amid the wreck of my well-intentioned startup…

In 2016, I founded a company as a senior at Stanford University. I had spent the three previous summers interning in the music industry, and those internships revealed how, behind the scenes, the ‘culture machine’ struggled with entrenched social inequalities. I was also astounded by the lack of advanced technology and wondered what role it could play. Many of my close friends at the time were musicians struggling to ‘make it’: despite being talented, they found it difficult to get booked, and even harder to get fair pay given the nature of the gigs. Seeing the industry from these perspectives inspired me to build an AI-driven “demand forecasting” technology that I hoped would help. I believed my tech would create opportunities that would bolster the now nearly nonexistent musical middle class, decrease the overbearing power of entrenched monopolies, and eliminate some of the parasitic roles that make a creative career so hard to sustain today.

After a year of building out our model on a test data set from a large, infamous ticketing company, we stumbled upon gold: our model could forecast demand for concerts far more accurately than even the most skilled industry employees. We celebrated the result, but in retrospect my mistakes at this precise moment make it more of an embarrassment than a success. To make the model as good as it could be, we needed access to a massive amount of high-quality ticketing data… and yet I had failed to see that the only people who had that kind of data were the very monopolies we were trying to undermine. If we wanted to build something that worked, its practical use would ultimately be determined by whichever company owned the data.

As a naïve student, I assumed these potential partners shared my vision for the tech. They definitely did not: it became clear they intended to use our AI model to drive up ticket prices more efficiently, which I strongly felt hurt customers and music culture at large. I was personally upset by the possibility that I had created something that would be used to hurt musicians and music lovers, damaging the world I identified with most. Disillusioned, I began looking for an equally viable business partner that wouldn’t use the tech in this way, and I was eventually able to find a better home for the team and the model. My startup exited in February of 2018 bound by non-disclosure, so I cannot say much more than that. Today, our model has been trained on their dataset and is performing at a success rate beyond what we had thought possible. …But it still isn’t clear how the company will ultimately use this tech. It won’t be used to hurt anyone, but it certainly won’t play the positive social role I had envisioned.
