Sponsored Content by Oracle

It’s More Than GPUs: Suno Founder Talks Infrastructure Choices for Generative AI Startups

Startup Suno AI helps consumers generate their own music online through a very simple interface. Unlike many startups that focus on text-based generative AI, Suno takes on the very different problem of building, testing, and serving models for audio. The Cambridge, Massachusetts-based company uses Oracle Cloud Infrastructure (OCI) AI infrastructure and other services to create and run these models.

Below, Leo Leung, vice president of Oracle Tech and OCI, chats with Mikey Shulman, CEO of Suno AI, about what generative AI startups want and need from their providers. The interview has been edited for length and clarity.

Leung: What should AI startup founders be thinking about when it comes to foundational technology and infrastructure?

Shulman: The first thing would be picking very carefully where you want to innovate, and that really means picking very carefully where you don’t want to innovate. Before Suno, we learned that things like system administration aren’t really places where you move the needle. So, we focus all day and all night on figuring out the right way to model audio and plugging that in. We are open about the fact that we borrow so much from the open-source community for things like building transformer models on text, and it’s lovely not to have to reinvent the wheel there. We don’t just think about models that map A to B, since that’s not how most humans think about interacting with these things. Ultimately, we are trying to build products that people love to use, and figuring out what foundational technology helps ensure a pleasurable experience for the user.

Leung: It would be interesting to hear more about music as data and the different types of workloads music represents. Can you talk a bit more about that and how it influenced your choice of infrastructure or technology underneath?

Shulman: Music, or audio in general, is very far behind images and text in terms of modeling. The key problem is how to represent audio in a way that is intelligible to transformers. There are hiccups, one being that transformers work on what are called tokens, which are discrete things, while audio is not a discrete signal; it’s a continuous wave. Furthermore, the problem for audio, especially high-quality audio, is that it’s sampled at either 44 kilohertz or 48 kilohertz, so one second of audio will have roughly 50,000 samples. That’s just way too many samples, and we need some way to take this very high frequency signal and kind of smush it down into something more manageable. We spend a lot of time innovating on the right way to take this very quickly sampled continuous signal and represent it as a much more slowly sampled discrete signal.
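
To make the numbers concrete: at 44.1 kHz, one second of audio is about 44,100 samples, far too many for a transformer’s token budget unless the signal is compressed first. The sketch below is a toy illustration of the general idea only, not Suno’s actual pipeline; the frame size and codebook are invented for the example. It chops a waveform into fixed-size frames and maps each frame to the nearest entry in a small codebook, turning tens of thousands of continuous samples into a few dozen discrete tokens.

```python
# Toy sketch: turn a continuous 44.1 kHz waveform into a short sequence of
# discrete tokens by framing and nearest-neighbor quantization against a
# codebook. Illustrative only; real audio tokenizers are learned models.
import numpy as np

SAMPLE_RATE = 44_100   # raw audio samples per second
FRAME = 512            # samples collapsed into one token (assumed value)
CODEBOOK_SIZE = 1024   # number of distinct token ids (assumed value)

def tokenize(waveform: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map raw samples in [-1, 1] to token ids in [0, CODEBOOK_SIZE)."""
    n_frames = len(waveform) // FRAME
    frames = waveform[: n_frames * FRAME].reshape(n_frames, FRAME)
    # Nearest codebook entry per frame; the ||frame||^2 term is constant
    # per frame, so it can be dropped without changing the argmin.
    dists = (codebook ** 2).sum(axis=1)[None, :] - 2.0 * frames @ codebook.T
    return dists.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    one_second = rng.uniform(-1.0, 1.0, SAMPLE_RATE)           # 44,100 samples
    codebook = rng.uniform(-1.0, 1.0, (CODEBOOK_SIZE, FRAME))  # random codes
    tokens = tokenize(one_second, codebook)
    print(f"{len(one_second)} samples -> {len(tokens)} tokens")  # 44100 -> 86
```

In practice the mapping from waveform to tokens is itself a learned model, but the sample-count arithmetic is the same: the continuous signal has to be squashed substantially before a transformer can work with it.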

Leung: Did that influence the kind of infrastructure you needed, or are you thinking about the same infrastructure but again trying to reduce the data to a point where you could put it into those models?

Shulman: Definitely. Just like any other machine learning model, these things aren’t super cheap to run. You want to do things quickly, both in production and even when you’re just experimenting. We are constantly trying to make things better, so having some elasticity of compute and availability of compute is important.

Leung: That is a good lead-in to my next question: what needs have changed for you and the company that you couldn’t have predicted as you scaled?

Shulman: When we started the company, the first thing we did was buy the biggest GPU box that you can safely plug into a home outlet and start to train the initial models there. That box sits unplugged in the next room. We did not really anticipate just how much scale matters for your models, your experiment throughput, and the way you roll things out to people. This is a cliché, but humans are very bad at reasoning about exponential growth. And so, despite having a PhD in physics, I too am very bad at reasoning about exponential growth. That certainly caught us by surprise. We also did not realize the extent to which products can come to market that take care of some of these concerns. For example, when we first logged into our Oracle cluster, everything we needed was just there. It was kind of a weird moment, because it was not just a bare machine where you have to set everything up yourself. It is a cluster. It felt like a product built for people like me. I get all the creature comforts that I need to do really good work.

Leung: When I talk about infrastructure, everyone gravitates toward the GPUs, but there’s more to it than processors. From your perspective, what other important components of AI infrastructure do you leverage?

Shulman: I think one concentric circle out from GPUs is all the fit and finish on our cluster: the ability to add users, launch jobs, have network-attached storage, have fast SSDs, all the things that let us utilize the GPUs. That’s amazing. Storage buckets for larger bits of data or user-generated content, and so on. Beyond the GPUs on the training side, we need all kinds of things to make the products run smoothly, whether that’s a service to deliver content quickly, or user management and queue management, and lots of other building blocks. Some of them we build, some of them we buy.

Leung: What are those special problems and solutions that you feel are specific to generative AI?

Shulman: This is an area that is evolving quickly, and things that you can take for granted today you’re not necessarily sure you can take for granted tomorrow. Can I fit my model on one card today? Maybe I can, and in a month I can’t, which would screw everything up. Something like Modal is amazing. It lets us launch workers on GPUs extremely easily.
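
For readers unfamiliar with Modal, the pattern Shulman is pointing at looks roughly like the sketch below: a plain Python function is decorated so that it runs in a GPU-backed container in the cloud. The function body and GPU type here are illustrative assumptions, not Suno’s workload, and the decorator names reflect Modal’s current Python SDK, which may differ across versions.

```python
# Minimal sketch of launching a GPU worker with Modal (assumed current SDK).
# Run locally with: modal run gpu_worker.py
import modal

app = modal.App("gpu-worker-sketch")

@app.function(gpu="A10G")  # ask Modal for a single GPU for this function
def which_gpu() -> str:
    import subprocess
    # Report which card the worker actually landed on.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return out.stdout.strip()

@app.local_entrypoint()
def main():
    # .remote() ships the call to a cloud container and waits for the result.
    print(which_gpu.remote())
```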

Look, generative AI is very compute intensive, and GPUs are annoying for software developers. They break a very sacred hardware-software abstraction barrier, and that has a way of rearing its head everywhere. I think that’s why a lot of this stack can be a little difficult to navigate.

But it’s not all GPUs; there’s also a ton of CPU work that goes into these things. There’s audio processing in general. When I daydream, it’s like maybe my cloud provider has made me not really care what cards I’m using. That would be swell, the same way I don’t necessarily care whether I get spun up on an Intel CPU or an AMD CPU in my cloud machine. Why should I care exactly what card it is?

Leung: Going beyond the tech, what other support should AI startups be looking for from their service providers?

Shulman: We’re always asking: “How much pain can my provider take away so I can focus on the stuff that is my comparative advantage?” Every company’s answer here is going to be different. In a more research-heavy company, there’s going to be a lot of research tooling, experiment management, job management, and so on. In a less research-heavy company, maybe it’s the world’s fastest CDN, because I need to deliver content to people. I’m always thinking about what we’re doing that we shouldn’t be doing, and how we stop doing that. And very often there are solutions out there; you just have to know where to look.

Leung: My final question is how should fast-growth AI companies think about costs?

Shulman: For AI companies, a big fraction of your spend is compute, so that’s something you have to think about judiciously. Sometimes you can find slightly cheaper solutions, but the cost savings can be far outweighed by the reliability and flexibility of going with a real provider. There are a lot of things popping up and going away, and we want to be around in 10 years, so we should probably be doing business with companies that are also going to be around in 10 years. If you have a plan to start using somebody and then get off them in a year, that needs to be a very conscious decision, not one made lightly. That’s part of why we picked OCI: trust.

Are you building your company and evaluating cloud provider options? Learn more about OCI’s broad selection of ISVs that offer AI services to help accelerate your development and deployment here.

 


This article is presented by TC Brand Studio. This is paid content; TechCrunch editorial was not involved in the development of this article. Reach out to learn more about partnering with TC Brand Studio.
