A glimpse into an indie AI product launch

Hey there,

The AI field is currently a vortex of innovation and untested possibilities. It's brimming with unexplored potential, and I, for one, have a strong urge to be involved while things are changing this quickly (a.k.a. FOMO). That's why, these days, I spend pretty much all my free time either reading up on new developments or hacking away at a new idea.

There's also a lot of noise, which I don't want to contribute to. That's why this newsletter is a bit less frequent than some others, and why I try to keep things focused and succinct.

Last we spoke, I announced Humanfest.chat — an interactive thought experiment where you jump into a chat with either a complete stranger or an AI-powered bot.

Today, I want to pull back the curtain a bit on the story behind that project and how it all transpired.

I'll also share some of the most incredible AI-related clippings I've come across during my countless hours of research in the space.

📖 Story time: A glimpse into an indie AI product launch

It was a rainy Tuesday… 😆 jk

It was actually a weekend, which is when I usually start working on a fresh new idea. I really liked the idea of Humanfest because of its interactive nature. It was also important to me to build something brand spanking new, something that hadn't been done before.

So all that was left was to actually make it happen (as in, code it) and release it to the world. I worked on the project for two days straight, fully immersed. I barely ate, slept, or interacted with anyone that weekend. All I could think about was the project and how to actually put it together. Luckily, I had previously done a lot of work on chat and messaging apps, so that helped.

With projects like this one, once the technical side is complete, I usually follow up with a polish stage, where I work on the UI/UX, add logos, graphics, styling, etc. — basically, making it presentable. This time, when I got to that stage, I decided to keep the polish to a minimum in order to launch as quickly as possible.

Finally, launch day arrived. I woke up, checked my phone (I know… a habit I've been trying to kick for a while), and literally the first email I saw was one from Product Hunt titled “Human or AI”.

“Here we go,” I thought. The email featured what is basically the same thing as Humanfest, except it's called Human or Not 🤦🏻‍♂️

Of course, the two products are not identical. The most notable difference is that Human or Not is a game where you find out whether the interaction you just had was with a human or an AI, while Humanfest is more of a thought experiment that deliberately never tells you the nature of your counterpart. It's scary how sometimes you literally just don't know!

I won't lie — the fact that Human or Not got released first was very demotivating, but I proceeded with the launch of Humanfest anyway, and I'm happy that I did. I received some great feedback from users, learnt a lot from the whole process, and added another AI-powered project to my arsenal.

At the end of the day, what's important for me at this stage is to keep building, keep searching for fresh ideas in the space, and keep putting in the reps of actually launching products to the public ⚡️

✨ AI shortlist

Here are some of the most interesting pieces of AI news that caught my eye.

Vercel impresses once again with its ability to simplify the complex into easy-to-use tools. Of course, the catch is that you tie yourself further into their ecosystem, but nonetheless, this is great for getting simple AI use cases up and running quickly. It includes streaming, chat interfaces, and first-class support for OpenAI, LangChain, Anthropic, and Hugging Face.
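
To give you a feel for it, here's roughly what a streaming chat route looks like with what I believe is their `ai` package, paired with `openai-edge`, in a Next.js App Router project. Treat the exact file path, imports, and helper names as my snapshot of the SDK at the time of writing, not gospel; the packages move fast.

```ts
// app/api/chat/route.ts (sketch of a streaming chat endpoint)
import { Configuration, OpenAIApi } from 'openai-edge';
import { OpenAIStream, StreamingTextResponse } from 'ai';

const config = new Configuration({ apiKey: process.env.OPENAI_API_KEY });
const openai = new OpenAIApi(config);

export async function POST(req: Request) {
  // The client sends the running message history with each request.
  const { messages } = await req.json();

  // Ask OpenAI for a streamed chat completion.
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  // The SDK turns the raw response into a stream the browser can render token by token.
  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
```

On the client, the SDK's `useChat` hook (from `ai/react`) keeps track of the message history and streams the reply into your UI, which is most of the appeal.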

This is great news for anyone working with the OpenAI APIs. On top of the context increase for GPT-3.5 Turbo to 16k (which is also blazing fast!), they finally made it respect system messages more, meaning you can configure your custom chat experience with much more control.
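
In case "configuring the chat experience" sounds abstract: the system message is the first message you pass to the chat completions endpoint, and it sets the persona and rules the model is now better at sticking to. Here's a minimal sketch with the official `openai` Node package; the Humanfest-flavoured prompt is just an example I made up.

```ts
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function main() {
  // The system message sets the rules; the 16k context leaves plenty of room
  // for long conversations or pasted-in documents.
  const completion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo-16k',
    messages: [
      {
        role: 'system',
        // Made-up example prompt, not anything from a real product.
        content:
          'You are the Humanfest greeter. Keep replies short and never reveal whether you are a human or an AI.',
      },
      { role: 'user', content: 'Who am I talking to right now?' },
    ],
  });

  console.log(completion.choices[0].message.content);
}

main();
```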

Turns out that previously it would have taken roughly $50M to embed the entire internet 🤯 But now that they've made embeddings 75% cheaper, that number drops to roughly $12M. Crazy. With embeddings + context injection, you can easily “extend” the knowledge of existing OpenAI models like GPT-3.5 or GPT-4 when building custom chat experiences. Supabase has a nice article explaining these concepts.
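
If you haven't played with this pattern yet, the flow is: embed your documents once, then at question time embed the question, pick the most similar chunks, and inject them into the prompt as context. Here's a stripped-down sketch; the in-memory `knowledgeBase` array and the `answerWithContext` helper are stand-ins of mine, and in a real setup the embeddings would live in a vector store (the Supabase approach uses Postgres with pgvector).

```ts
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical in-memory knowledge base of pre-embedded text chunks.
// In practice these would live in a vector store (e.g. Postgres + pgvector).
const knowledgeBase: { text: string; embedding: number[] }[] = [];

// Plain cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function answerWithContext(question: string): Promise<string | null> {
  // 1. Embed the question with the (now much cheaper) embeddings endpoint.
  const { data } = await openai.embeddings.create({
    model: 'text-embedding-ada-002',
    input: question,
  });
  const queryEmbedding = data[0].embedding;

  // 2. Pick the few chunks most similar to the question.
  const context = knowledgeBase
    .map((chunk) => ({
      text: chunk.text,
      score: cosineSimilarity(queryEmbedding, chunk.embedding),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3)
    .map((c) => c.text)
    .join('\n---\n');

  // 3. Inject the chunks as context and let the chat model answer.
  const completion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo-16k',
    messages: [
      { role: 'system', content: `Answer using only the context below.\n\n${context}` },
      { role: 'user', content: question },
    ],
  });

  return completion.choices[0].message.content;
}
```

The nice part is that nothing gets retrained; you're just stuffing the relevant bits into the (now much roomier) context window at query time.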

Meta AI has developed a new generative speech model named Voicebox, which can handle tasks it wasn't specifically trained for with high performance. Unlike prior models that needed task-specific training, Voicebox learns from raw audio and an accompanying transcription. It is capable of synthesizing speech in six languages, as well as noise removal, content editing, style conversion, and diverse sample generation.

Meta AI also detailed how they’ve built a highly effective classifier that can distinguish between authentic speech and audio generated with Voicebox.

“There are many exciting use cases for generative speech models, but because of the potential risks of misuse, we are not making the Voicebox model or code publicly available at this time.” (Meta AI)

Last but not least, I’ll leave you with this new AI-enabled trend of artistic QR codes. This brings a whole new dimension to the boring old QR code.

That’s it for now, thanks for reading. If you have any feedback, feel free to reach out to me on Twitter.