Logspot

ShowHue

ShowHue was founded by Hannie in 2017, and Ire joined as a co-founder & CTO in 2018. We focused on computer vision models and applied them to the marketing and e-commerce industry. Due to limited resources, we decided to close the company in 2023.

This page shares our stories and the insights we gained across product development, R&D, and business strategy. Startup life is chaotic, constantly bending to user demands and leaving little room for reflection. Rather than open-sourcing our code, we believe the lessons we've learned hold greater value.

A heartfelt thank you to Ire, Jason, Ruby, Wan, Vivian, YC... Without you, none of this would have been possible.

You can read our story by scrolling down in reverse chronological order. Or, you can listen to the journey in the AI-generated podcast below.

Backstage & Beyond

End of the Journey

As a CEO, I make decisions daily - for users, for teams. But deciding to shut down the company? That was the hardest call of my life.

Throughout this journey, What Went Well was that we successfully applied computer vision (CV) technology across wildly diverse domains: B2B to B2C, edge computing to APIs. This gave us rare experience in developing products for the next AI era.

What Didn't Go Well was our fundraising. When you build a team capable of training models, meaning you're at the foundation layer, you need resources - not only people but also computing power. All of this requires significant capital.

What We've Learned can be summarized in three points:

First, pick the right market. This is the most crucial yet most challenging thing to do, because the "right market" depends on your prediction of the future and on why you are obsessed with that future (your vision). It also requires co-founders who can attract exactly the right talent.

Second, how do you go to market? Go-to-Market ≠ “Build It and They’ll Come”. Our early assumption was: “Our tech is revolutionary – users will flock post-launch!” But the reality is "no exposure, no existence". Without users, you get no feedback. Without feedback, no growth. Launching isn't the finish line – it's barely step 0.5.

Lastly, find the right people to join you. Looking back on this journey, we assembled Taiwan's brightest in R&D, business, and design. Other founders constantly asked us why our team members stayed so long (with compensation lower than Silicon Valley packages).

From our perspective, culture is the thing founders most easily overlook. Sometimes, it is the culture that paves the path to your vision. Even now, as ShowHue winds down, the culture we built persists in every member (even interns who stayed for just a few months).

This is why we are journaling our story here. You now have the DNA; you can re-encode it to develop the next era of AI.


With sincere regards,

Co-founder & CEO, Hannie Liu

Backstage & Beyond

Awards

Over the past five years, we’ve proudly been featured on the Taiwan AI Ecosystem Map for three consecutive years. We also won numerous awards, including the Asia Open Data Competition, the AI Application Competition, and the AI Internship Champion. We were supported by Taiwan's National Center for High-Performance Computing, which let us use their cloud to train our models for free – saving us over $200,000!

At first, chasing awards was a solid strategy – great for credibility and scoring partnerships/clients. But you have to refocus on actually building your product to avoid falling into the trap of just chasing benchmarks. (You know, like when companies just compete to score a tiny bit better on leaderboards?) Users don't pay you for a 2.5% improvement in accuracy; they pay for end-to-end value delivery.

But we're still grateful for those prizes; they really helped us improve our model performance (and maybe kept the team fed!).

Team

Backstage & Beyond

Accelerators

Many founders ask if it's necessary to join an accelerator. We'd say if you and your co-founders are first-time founders, then "YES," absolutely. We were lucky enough to get into 500 Global Taiwan, AppWorks, and Facebook Accelerator for Creative Apps. Each program played a crucial role at different stages of our journey. We met incredible investors, mentors, and fellow founders, which broadened our horizons and brought us closer to the global market.

Press

Backstage & Beyond

Showcases

Representing Taiwan as an AI company and exploring markets like Singapore, Japan, and the United States truly taught us how to find business partners and communicate with potential customers. Every overseas demo day had us hyped – even when we survived on 7-Eleven vending machines to stretch our runway.

Today, ShowHue’s crew is scattered across the globe. We hope that in the future, we can continue to make a significant impact in the AI field.

Singapore

Product & Customer

Sell APIs with business partners

Once we had stable models (image generation APIs) that satisfied users, we had the opportunity to collaborate with cloud service partners who had a large client base in Asia. Around the same time, we were accepted into the Microsoft for Startups Founders Hub, receiving $200,000 in Azure credits and $1,000 in OpenAI credits.

The sales cycle for selling APIs is similar to selling cloud services, so if we wanted to scale faster, we needed to hire people with that kind of sales experience. That said, the profit margin for APIs is higher than for SaaS, and unlike SaaS, APIs have built-in economies of scale. Once customers integrate APIs, they are less likely to switch to a competitor, leading to higher retention. It's a solid business model.

This brings us close to the end of the story. While the world lost its mind over AI, racing to launch products and scale empires… we were struggling to afford the GPU servers needed to run our image generation models.

But we are still incredibly grateful that, in the end, we had a good product, found paying customers, and established great partnerships.

Research & Development

No need to prompt for image generation

OutputImage

As more platforms started using our APIs, our top priority became helping them generate images seamlessly. Back in 2022-2023, it still took 30 seconds to a full minute to generate images (usually four per request).

To streamline the process, we aimed to eliminate the need for manual prompt typing. Our algorithm was designed to detect the product in a user's uploaded image and automatically find suitable scenes. The goal was "no-edit" outputs right from the start. This approach required us to narrow down the product categories, meaning some categories performed better than others.

Choosing these categories is an art; it requires considering factors beyond just technology or algorithms. We called our mechanism RAG (Retrieval-Augmented Generation) to make it easier to understand – a kind of "dimensionality reduction" approach to generation that proved more efficient and produced better results.
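As a rough sketch of that retrieval step (the names and embeddings below are toy stand-ins, not our production code): embed the uploaded product image, rank pre-categorized scene references by cosine similarity, and hand the top matches to the generation model.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def retrieve_scenes(product_embedding, scene_db, top_k=3):
    """Rank reference scenes by similarity to the uploaded product's embedding."""
    scored = [(name, cosine_sim(product_embedding, emb)) for name, emb in scene_db.items()]
    scored.sort(key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in scored[:top_k]]

# Toy database: in practice these would be embeddings from a vision encoder,
# one entry per curated reference image.
scene_db = {
    "amazon_style": np.array([1.0, 0.0, 0.0]),
    "straight_view": np.array([0.9, 0.1, 0.0]),
    "top_view": np.array([0.0, 1.0, 0.0]),
}
product = np.array([1.0, 0.05, 0.0])
print(retrieve_scenes(product, scene_db, top_k=2))
```

The retrieved references then condition the generator, which is what removed the need for users to type prompts at all.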

While proud of these advancements, we’re eager to see even more efficient methodologies emerge in this rapidly evolving field.

Background and Objectives

The project was initiated to address the challenge of generating product images with suitable backgrounds and scenes without requiring user prompts. The primary objective was to build an image database that serves as a reference for the generation model to create images meeting users' needs. By leveraging RAG, the project aimed to provide a structured way to retrieve reference images based on the desired style and composition.

Significance and Reviews

We filtered 1,179,209 images down to 611,483 images and automatically categorized them into 5 categories.

  • What Went Well:
    • Strategic Model Choices: The decision to use Google's Product Search API and the LAVIS image-to-text model proved to be cost-effective and efficient compared to building custom models.
    • Image Categorization: Developed a robust system for categorizing images into useful categories such as Amazon Style, Straight View, and Top View, which facilitated targeted image generation.
    • Automated Filtering: Implemented automated filtering using Vision API for object detection and attribute detection, which streamlined the process of dropping unwanted images.
  • What Didn't Go Well
    • Initial Classification Issues: The initial attempts to classify images using a custom-trained model were ineffective, with accuracy around 52-53%, equivalent to random guessing.
    • Complex Definition Process: Defining the criteria for dropping images was challenging, especially for detecting partial objects or specific human body parts in images.
  • What We've Learned
    • High Costs and Complexity of Custom Models: Building a custom content-based image retrieval (CBIR) system was costly and time-consuming, leading to the decision to use Google’s existing product search API instead.
    • Value of Existing Models: Leveraging pre-existing models and APIs can save time and resources compared to building custom solutions.
    • Impact of Image Characteristics on Generation: Factors like perspective (straight view vs. top view) and composition significantly affect the quality and realism of generated images.
  • What Still Puzzles Us
    • Role of Style in Image Generation: It would be interesting to explore how different styles (e.g., minimalist, vintage) could be incorporated into the generation process and how they might impact user preferences.

Research & Development

Optimize and evaluate models

Thankfully, Stable Diffusion 2.0 was released in November 2022. This finally allowed us to optimize our model for enhanced output quality. However, when we launched our image generator API, enterprise customers began reporting inconsistent image quality – clear signs the product wasn't meeting market demands. We realized that even with SD2 as our foundation, we still needed to design an image evaluation system to pinpoint where and how to optimize the model.

We think evaluating image generation is far harder than evaluating language models. There’s no standard benchmark like MMLU for images. Plus, it really depends on the application. For example, our customers were often focused on realistic images, but in some cases they wanted more creative outputs – a complete 180 in style. Balancing these different needs is still a puzzle for us.

Background and Objectives

To address feedback of poor image quality from enterprise customers after our API launch, we recognized the challenge in evaluating different AI model versions, which hindered effective communication and performance assessment. Therefore, we aimed to enhance our image generator API's performance to meet user needs by automating cross-version model testing in beta environments. This automated system, visualized through a gallery, would allow our team to systematically evaluate model performance and tackle specific issues like unintended shapes, poor facial rendering, lack of diversity, size distortions, and instability – all contributing to our goal of delivering a more reliable, higher-quality image generation experience.

Significance and Reviews

  • Boosted output quality pass rate from 26.98% to 62.75% by integrating object detection with GPT-4 for advanced LLM-enhanced image optimization.
  • Conducted beta testing on 11 generation model versions for thorough evaluation.
  • Successfully auto-generated over 12,912 testing images.
  • What Went Well:
    • Solving the issue of single-image generation per call by using RAG (Retrieval-Augmented Generation) with top-0 to top-3 is an excellent solution. Paying customers have noticeably received a greater diversity of images.
    • SD ControlNet Canny significantly improved the stability of image generation.
    • Chose Airtable as a collaborative database platform for its visual interface and ease of use for non-technical team members.
  • What Didn't Go Well
    • Using an Airtable gallery as a visualization tool for a database works fine with image counts below 10,000, with acceptable loading times. However, exceeding this quantity might become too much for the frontend to handle.
    • Under the SD2 model, the generation of images related to humans still performs poorly. It is not recommended to use prompts related to humans in this version.
  • What We've Learned
    • Fine-tuning the model is still resource-intensive for a limited-resource team. Modifying some algorithms to help improve output performance is more efficient than modifying/optimizing the model.
  • What Still Puzzles Us
    • The default guidance scale is 7.5. We tested a range from 7.5 to 20 and found 15 to work best, yet it still didn't resolve the issue of generating unwanted artifacts.
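A parameter sweep like the one described can be scripted as below. This is a minimal sketch, not our actual harness: `generate` and `passes_qc` are placeholders for the real pipeline call and the image-quality evaluator.

```python
def sweep_guidance_scale(generate, passes_qc,
                         scales=(7.5, 10.0, 12.5, 15.0, 17.5, 20.0),
                         n_samples=8):
    """Generate n_samples images per guidance scale and record the QC pass rate."""
    pass_rates = {}
    for scale in scales:
        images = [generate(guidance_scale=scale, seed=i) for i in range(n_samples)]
        pass_rates[scale] = sum(passes_qc(img) for img in images) / n_samples
    best = max(pass_rates, key=pass_rates.get)  # first scale with the top pass rate
    return best, pass_rates

# Demo with stand-ins: a "generator" that just tags outputs with their scale,
# and a QC check that happens to prefer mid-range guidance.
fake_generate = lambda guidance_scale, seed: {"scale": guidance_scale, "seed": seed}
fake_qc = lambda img: 12.5 <= img["scale"] <= 17.5
best, rates = sweep_guidance_scale(fake_generate, fake_qc)
```

In practice `generate` would wrap a diffusion pipeline call and `passes_qc` the artifact/face checks described above; the sweep only automates the bookkeeping.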

Product & Customer

API-as-a-Product

PromptAPI

Our earliest API sales date back to 2020, when we first deployed models to the cloud for other tech companies to use. Between 2020 and 2022, we launched multiple API versions, but none were positioned as a true API-as-a-Product business model. Funny story—our first API documentation and pricing table were literally built in Google Docs (yes, you read that right—tables in Google Docs!).

Selling APIs built on AI models is a whole different ballgame from traditional SaaS. When we pitched this model early on, we got plenty of skeptical looks: “Why build it this way? Do other startups (they meant unicorns) even do this?” Before the AI boom – before those now-famous AI companies started selling APIs – this way of thinking about the product was a hard sell. But the demand came directly from our customers, so back in 2022, we believed it was the future.

We iterated through three generations of API versions, and it wasn't until v3 that we got closest to meeting our users' requirements. From v1 to v3, we faced many challenges, both technical and commercial. To DIPP, Rosetta AI, TG3D, Giftpack, GranDen, Wix, and others—you weren’t just early users. You became our co-pilots. Your feedback was pure gold in shaping the product we’d always dreamed of.

Background and Objectives

We started this project in response to demand from our B2B clients, particularly platform customers, who wanted to integrate AI models into their own front-end and back-end applications. Instead of providing local model licenses, clients requested API access for seamless integration. Our goal was to deliver a simple, easily testable, and secure API. This API utilizes a token exchange mechanism for enhanced security, allowing users to quickly and securely access image generation and other AI-powered solutions. Clear documentation and straightforward testing enable users to make requests using API tokens after receiving backend permissions.
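The token-exchange flow could look roughly like the HMAC-based sketch below. This is illustrative only – the secret, TTL, and payload format are hypothetical, not our actual implementation.

```python
import base64
import hashlib
import hmac
import time

SECRET = b"server-side-signing-secret"  # hypothetical; never leaves the backend

def issue_token(client_id: str, ttl_seconds: int = 3600, now=None) -> str:
    """Exchange a verified client identity for a short-lived, signed API token."""
    expiry = int((now if now is not None else time.time()) + ttl_seconds)
    payload = f"{client_id}:{expiry}".encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + signature

def verify_token(token: str, now=None) -> bool:
    """Check the signature and expiry before serving a generation request."""
    payload_b64, _, signature = token.rpartition(".")
    payload = base64.urlsafe_b64decode(payload_b64.encode())
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False
    expiry = int(payload.decode().rsplit(":", 1)[1])
    return expiry > (now if now is not None else time.time())
```

The point of the design is that clients only ever hold short-lived tokens, so a leaked token expires quickly and the long-lived credential stays in the backend permission system.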

Significance and Reviews

  • Deployed 5 image generation model APIs.
  • Researched and tested 17+ frontier AI models to refine the API’s output, focusing on establishing robust content policies and guidelines that ensure the visual outputs align with AI safety standards.
  • Successfully developed 11 API endpoints across various functionalities, adhering to the PRD’s specifications to broaden the API’s scope and utility.
  • Achieved seamless onboarding and testing within 10 minutes, as each API can be tested directly on our API documentation platform without needing local testing, ensuring a rapid and smooth onboarding experience.
  • What Went Well:
    • The addition of a prompt parameter, leveraging GPT technology, enabled more nuanced and tailored image outputs based on user-defined scenes. This significantly improved generation capabilities compared to API-v2.
  • What Didn't Go Well
    • Cost of GPU Servers for Upscale API: The upscale API, designed to enhance image dimensions (2x or 4x), was resource-intensive and required dedicated GPU servers, substantially increasing operational costs.
  • What We've Learned
    • Resource Management:
      • Dedicated GPU: We allocated a dedicated GPU server to meet the computational demands of upscaling.
      • On-Demand Availability: Due to the competitive landscape of upscaling tools and associated costs, we made the Upscaler API available on-demand to clients who specifically requested it. This approach helped manage resource utilization and costs.
    • Value Proposition: We focused on providing a seamless and integrated experience within the overall API suite, offering convenience and reliability to users who prefer not to use external tools.
    • Parameter Design and Version Compatibility: Designing API parameters required precision to maintain consistency and avoid frequent changes. Any parameter alteration could necessitate substantial backend changes, impacting compatibility and stability, posing risks, and requiring careful planning.
  • What Still Puzzles Us
    • Cost Efficiency and Competitive Edge: Developing APIs like background removal and upscaling is driven by market demand and strong competition. While these services are essential for a comprehensive offering in a mature and competitive market, achieving cost efficiency while remaining competitive is challenging. Specifically, the upscaler API's need for dedicated GPU servers was not cost-effective in a market where similar services are often free or very cheap.

Product & Customer

Go-to-Market strategy

We’ve always known that educating the market takes time and money—no wonder investors kept asking, “Why is NOW the perfect time for this product?” Back then, generative AI tech was still evolving, and users needed to understand its value.

The most popular feature of our product was AI background removal, even though the best-known service at the time was Remove.bg (later acquired by Canva). The benefit of background removal is essentially cost reduction; internally, we categorized these as "Amazon-style" product photos. But we saw it as a short-lived win, easily replicated.

For us, AI-generated marketing photos with context, that could spark imagination for a brand's customers, held much greater potential for increasing purchase rates (and there were already many e-commerce reports backing this up). Imagine a future where different customer profiles would see different product images tailored to their preferences and interests—all with minimal effort. That was the vision we were working toward.

At that time, we were experimenting with several go-to-market (GTM) strategies:

  1. Virtual Summits: During the pandemic, interest in Shopify and other e-commerce platforms reached an all-time high. We participated in an online summit attended by nearly all the major global platforms, where a booth cost at least USD 2,500 per seat. It was a disappointment. The event likely only benefited the large brands giving speeches. Whether we passively waited for brands to visit our booth or proactively sent messages, the results were minimal.
  2. Cold Email Outreach: We found a database containing SMBs to large brands worldwide, paid for it, and downloaded 2 million records, including their categories, websites, emails, etc. After filtering for specific product categories, we selected 200,000 stores/brands. We then used a mass cold emailing approach. While this method was low-cost, the results still weren't as good as we'd hoped.
  3. Launch Shopify & Shopee App: We decided to develop a third-party app on Shopify and Shopee. This approach, at the very least, gave our logo visibility. It also allowed us to leverage the brand recognition of Shopify and Shopee in our marketing efforts. Of course, the trade-off was that we had to invest in additional development to meet their platform requirements.

These moves laid the foundation for us to sell our API, and also almost led to our acquisition by the number one marketing automation SaaS company on Shopify (located in Silicon Valley).

Research & Development

Ethical data sourcing

We're all aware of the notorious reputation AI companies have for web scraping, not to mention that building early image generation models required similar data collection methods for "model training." Before we even began training generative models (Stable Diffusion 1.5 was released in October 2022), we had already built an automated image collection mechanism for feature learning.

Collecting publicly available images isn't difficult for a team with skilled data engineering and deep learning engineers. The real challenge lies in properly documenting data sources and navigating intellectual property concerns. During development, I (Hannie) would often joke with the team: “Hey! The golden rule of this project is simple: don’t let me get sued and hauled off to jail, or the company will lose its captain!” (The team laughed every time – little did they know I was dead serious.)

Even today, how to ethically collect images remains a contentious debate. But we remain optimistic that a balanced solution will emerge in the future—one that respects both innovation and creators’ rights.

Background and Objectives

To gather a substantial volume of product imagery for e-commerce, marketing, and advertising, we established an automated web scraping system paired with a semi-automated filtering model. This model selectively filtered out undesirable images such as selfies, explicit content, and cartoons to meet specific content policies, using pre-trained models enhanced with tailored conditions for image selection. Additionally, we documented the data sources to comply with platform-specific usage policies and ensure preparedness in risk management.
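The semi-automated filtering can be sketched as a chain of policy detectors that reject an image on the first violation. The detectors below are stubs over toy metadata; in practice each would wrap one of the pre-trained classifiers or object detectors we mention, with a tuned threshold.

```python
def passes_content_policy(image, detectors):
    """Run each policy detector in turn; reject on the first violation found."""
    for policy_name, violates in detectors.items():
        if violates(image):
            return False, policy_name
    return True, None

# Stub detectors keyed on illustrative metadata fields.
detectors = {
    "selfie": lambda img: img.get("faces", 0) > 0 and img.get("is_product") is False,
    "cartoon": lambda img: img.get("style") == "illustration",
}
ok, reason = passes_content_policy(
    {"faces": 0, "is_product": True, "style": "photo"}, detectors
)
```

Logging the failing `policy_name` alongside the documented data source is what made the pipeline auditable for risk management.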

Significance and Reviews

  • Collected 1.1 million images from 6+ target platforms using web scraping, ensuring policy-compliant data collection.
  • Implemented an automated filtering system that achieved an 81.56% success rate in preparing data for AI model training, employing classification, object detection, and SVM algorithms for efficient data filtering and model training.
  • Developed a data source tracking pipeline to monitor data status and updates, facilitating efficient data management, risk management, and taxonomy development for the data collection system.

Product & Customer

The 2nd Pivot

RetailerSuite

Before ChatGPT kicked off the Generative AI wave in 2022, our team was focused on using GANs (Generative Adversarial Networks) for product image generation in marketing and advertising. Compared to today's standards, the models and algorithms back then were quite immature. Although our team was capable of training these models, it was incredibly challenging to balance revenue generation with research and development.

The pandemic unexpectedly became our focus booster. We pivoted hard toward a single product roadmap, shifting from chasing random contracts to doubling down on model development. But let’s be real—going from R&D to deployment and actual product delivery was a marathon, not a sprint. Many times, we spent two weeks to a month training a model—burning through massive GPU costs—only to find that the results weren’t as good as we had hoped (and were nowhere near ready for customers).

Still, we truly valued this period of focused R&D. It gave our team a strong sense of unity and purpose, allowing us to refine our vision and move forward together.

Background and Objectives

When COVID hit in 2020, we realized our edge computing-based solutions needed an urgent reboot. Luckily, we leaned into an existing contract where we’d used AI to help retailers digitize storefronts—and rebuilt it into what became Retailer Suite.

During this process, we discovered a costly and repetitive issue—creating high-quality, professional product photos. This got us thinking: What if we could extend our CV model to generate product images instead? We didn’t stop there. Our vision for the platform was bigger—we wanted it to provide intelligent brand insights, going beyond just AI-generated visuals.

Significance and Reviews

Our user base grew 600% month over month. We've landed some big names as clients, including the largest outdoor and mountaineering brand, a major US wholesale brand, and a high-end Japanese department store. For these companies, we've used AI to achieve an 80% reduction in time spent on certain tasks and a 17% increase in total revenue during promotional periods.

  • What Went Well:
    • Automatic AI background removal saved users 80% of their time.
  • What Didn't Go Well
    • Integrating interactive segmentation models led to issues with the frontend's stability, causing crashes or long response times.
    • For brands requiring higher-quality images, our response time for the generation process was slow, and the system couldn't handle the large volume of high-resolution product photos users were inputting. We were facing a choice: invest in higher-performance servers or optimize our model.
  • What We've Learned
    • Images can be processed in the front-end or back-end. Finding the balance when running visual AI models and simple photo processing simultaneously is an art.
  • What Still Puzzles Us
    • Enterprise clients needed batch upload capabilities and didn't want editing features, while SMBs valued editing functions over batch processing. We had both SMBs and enterprise clients, so how should we prioritize features?

Product & Customer

Opportunity or Distraction?

In 2019, we had the chance to represent the Taiwanese government at Innovfest in Singapore and TechCrunch in San Francisco. When we showcased our AI solution there, opportunities seemed to pop up everywhere.

In Singapore, we had a potential deal to sell our product to a high-end luxury resort in Southeast Asia. Around the same time, we were also working on a side project that used the same detection model for construction companies. This unexpectedly opened another door for us, especially when we demoed it in San Francisco. An angel investor from one of Asia's largest construction firms – with numerous resorts, shopping malls, and construction projects – expressed interest in investing in us.

Back in Taiwan, we landed another business contract to use our detection model to help a Taipei business district with its digital transformation. For us back then, the core tech stayed the same—it was just the applications that shifted. Since we hadn’t secured investment yet, we took on all three opportunities—not just to make money but also to gain traction.

This was when our team started generating positive cash flow, which allowed us to hire more engineers and expand our business development efforts. Looking back, sure, it’s easy to ask, “Why didn’t we just focus on one thing?” But when someone is offering you a major investment—or even better, paying you upfront for your service (note: not even your product—just the service), you can’t help thinking: "Could this be our big break?"

This scattered hustle lasted until 2020, when COVID hit. That changed everything.

Product & Customer

The 1st Pivot

We all know that pivoting is sometimes an inevitable part of the startup journey. For us, our pivot wasn’t about abandoning what we had—it was more like an extension of our technology, shifting its application (and, of course, our target customers and business model).

Since our last major contract involved AI-powered virtual try-on tech with object detection, we had the chance to collaborate with nearly every company building edge computing solutions in Taiwan. Taiwan is famous for being a hardware powerhouse. These incredible companies are really masters at building the hardware to power AI applications. And that’s how we started diving into offline scenarios.

Background and Objectives

In collaboration with hardware partners, we developed an AI-powered recommendation system for digital signage in shopping malls, utilizing edge computing devices with cameras. The primary objective was to enhance customer engagement and brand visibility within shopping malls. By leveraging edge computing and computer vision, we aimed to create a personalized shopping experience. The system would detect customer demographics (age, gender) and recommend relevant brand information on digital signage, tailoring content to individual preferences. The goal was to increase customer dwell time and drive sales for the brands featured on the signage.
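The content-selection logic on the signage side can be sketched as a simple matcher between detected demographics and campaign targeting rules. The campaign fields and creatives here are illustrative, not our deployed schema.

```python
def pick_campaign(age, gender, campaigns, default="default_brand_loop"):
    """Return the first campaign whose target segment matches the detected viewer."""
    for campaign in campaigns:
        lo, hi = campaign["age_range"]
        if lo <= age <= hi and gender in campaign["genders"]:
            return campaign["creative"]
    return default  # fall back to the generic loop when no segment matches

# Hypothetical campaign list; the edge device feeds (age, gender) estimates
# from its on-device vision model into this matcher.
campaigns = [
    {"age_range": (18, 30), "genders": {"F"}, "creative": "spring_fashion"},
    {"age_range": (25, 45), "genders": {"F", "M"}, "creative": "outdoor_gear"},
]
```

Keeping the matcher this simple mattered on edge hardware: the expensive part was the demographic detection model, not the rule lookup.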

Significance and Reviews

We delivered a 10x increase in customer dwell time at digital signage locations for our shopping mall clients.

  • What Went Well:
    • The AI-powered recommendation system significantly increased customer engagement, with customers spending 10 times longer interacting with the digital signage compared to the non-AI solution.
  • What Didn't Go Well
    • Difficulty in hardware integration and sales due to dependency on hardware companies.
    • Hardware companies were reluctant to distribute single units of digital signage or edge computing devices, complicating the deployment process.
    • Computational limitations of local edge devices prevented some models from running effectively.
    • Decision between edge deployment (localized processing) and cloud computing (increased response times) posed significant challenges.
  • What We've Learned
    • Collaborating with hardware companies as a software entity is complex and often slow-moving.
    • Hardware limitations can significantly impact the effectiveness of AI models deployed at the edge.
  • What Still Puzzles Us
    • How to track the user journey offline effectively.

Product & Customer

Try to find PMF from B2B

As a smart assistant designed to recommend products within a conversation, our product mirrored a platform business model: we aimed to recommend product links to users based on their detected style and to charge fashion brands a fee.

We spoke with the largest fashion brand and tried to sell them our solution – and quickly learned that B2B (business-to-business) sales differ significantly from B2C. Delivering a comprehensive solution that provides long-term value to the enterprise is crucial. We proposed integrating their product database (a form of RAG – Retrieval-Augmented Generation) and embedding the AI assistant directly into their website. This approach could help their customers select fashion items more effectively and improve their purchasing experience.

Ultimately, the project evolved into a virtual try-on solution rather than the originally proposed fashion assistant. From an enterprise perspective, it's indeed challenging to share customer data for enhanced recommendations unless we offer local model deployment. Furthermore, LLMs were not mature back in 2018, and adopting an AI assistant was considered risky because their IT departments would need to address the challenges AI introduces.

Despite this shift, we remain incredibly grateful to Jack, the director of OB Design, for giving us the opportunity to propose and develop this AI solution.

Product & Customer

Try to find PMF from B2C

When we launched our smart fashion chatbot, our initial focus was on introducing the tool to the consumer market. Fortunately, we had the opportunity to promote it through the largest human resources platform, which was exploring AI-driven behavioral analysis during interviews. We focused on new graduates in the financial, IT, and software industries. Candidates simply submitted their basic information and could immediately begin recording their interview video. Our AI models detected factors like professional attire, behavior, and emotional expressions. Based on these factors, the chatbot provided an overall interview rating.

Within a week, we reached 11,000 interactions. However, the business model remained unclear. Back in 2018 (or even now), it wasn't easy to generate revenue from B2C AI solutions. Additionally, running computer vision models on GPU servers to maintain acceptable response times incurred significant costs. Our team struggled to find a viable way to monetize our AI assistant.

Product & Customer

Our first product

Ussitant

We all know that finding Product-Market Fit (PMF) is the top priority for a startup. When we launched our first product, we only had a single landing page to showcase it. Based on our experience, there's no need to spend time designing a logo or a complex website initially. Just launch the prototype and iterate based on user feedback as quickly as possible.

Background and Objectives

We launched a fashion recommendation chatbot using Facebook Messenger and Chatfuel, aiming to enhance user engagement through personalized styling advice. Our goal was to create an interactive chatbot that could recommend personalized fashion styles by analyzing user-uploaded photos and preferences. Utilizing data from influencer photos and the DeepFashion dataset, we aimed to provide accurate style suggestions.

Significance and Reviews

Achieved the highest week-over-week growth rate of 1200%.

  • What Went Well:
    • Successfully collected a large dataset of influencer photos for fashion recommendations.
  • What Didn't Go Well
    • LLMs in 2018 were not mature enough to provide a seamless conversational experience.
    • Users often used the chatbot for general conversation rather than its intended fashion recommendation purpose.
    • The business model was unclear, especially regarding whether to charge users or merchants and how to monetize effectively.
  • What We've Learned
    • Using a messenger chatbot was an efficient way to quickly launch an MVP and replace the need for a full-fledged app.
    • Ensuring data privacy and access control is crucial, especially in conversational AI applications.
    • User interaction data revealed that many users share sensitive information, highlighting the need for robust privacy measures.
  • What Still Puzzles Us
    • How to balance the two-way platform dynamic: Should content providers or users be prioritized initially?
    • Finding the optimal strategy for monetizing the chatbot service while ensuring a positive user experience.
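One recurring problem above was that users treated the chatbot as general conversation rather than a fashion tool. A common mitigation is a lightweight intent gate in front of the recommendation pipeline, so off-topic messages get a canned redirect instead of consuming model compute. A minimal keyword-based sketch (the keyword list and routing labels are illustrative, not our production logic):

```python
# Hypothetical keyword-based intent gate for a Messenger-style chatbot.
FASHION_KEYWORDS = {"outfit", "dress", "style", "wear", "jacket", "shoes", "jeans"}

def route(message):
    """Route a user message to the recommendation flow or a fallback reply."""
    words = set(message.lower().split())
    if words & FASHION_KEYWORDS:
        return "recommend"  # hand off to the CV/recommendation pipeline
    return "fallback"       # canned reply steering users back to fashion topics

print(route("What should I wear to an interview?"))  # recommend
print(route("How is the weather today?"))            # fallback
```

A keyword gate is crude but cheap; a small text classifier could replace it once enough labeled chat logs accumulate.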
Research & Development

Before annotation era

Background and Objectives

To overcome the challenge of immature fashion item detection models for our AI fashion chatbot, we developed LabelFun, an in-house annotation platform for Mac and Windows. Our goal was to create an efficient, remote annotation platform that supports both Mac and Windows, attracting a large pool of applicants due to its convenience and flexibility. By providing clear guidelines and training, we aimed to ensure high-quality data collection and preparation, crucial for AI model training and reinforcement learning.

Significance and Reviews

  • Successfully managed over 500 applicants for the annotation project.
  • Led a dedicated team of more than 20 annotators who successfully labeled over 100,000 images with complex fashion labels, supporting our AGI preparedness by refining our model’s ability to understand and process fashion-related imagery.
  • Developed comprehensive labeling guidelines that not only ensure data consistency but also uphold stringent privacy standards and ethical considerations, reflecting our commitment to product and user policies.
  • What Went Well:
    • Efficiently labeled over 100,000 data points, aiding in data collection and preparation.
    • Developed a user-friendly annotation platform compatible with both Mac and Windows.
  • What Didn't Go Well
    • Challenges in creating clear and precise guidelines for annotators.
    • Difficulty in automating the quality control process for annotations without a standard answer.
    • Encountered issues with the accuracy and consistency of annotations when guidelines were not followed precisely.
  • What We've Learned
    • Clear and detailed guidelines are crucial for consistent and accurate data annotation.
    • Structured screening processes help in selecting skilled annotators.
    • Remote annotation jobs are highly attractive and can draw a large pool of applicants.
  • What Still Puzzles Us
    • How to automate the quality control process effectively for annotations with no standard answer.
    • Exploring scalable solutions for managing and verifying large-scale annotation tasks in AI projects.
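One way to partially automate quality control without a gold-standard answer, mentioned as an open puzzle above, is redundancy: have multiple annotators label the same image and flag it for manual review when they disagree. A minimal sketch using bounding-box IoU as the agreement measure (the threshold and box format are assumptions, not LabelFun's actual implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def flag_for_review(boxes, threshold=0.5):
    """Flag an image when any pair of annotators disagrees (IoU below threshold)."""
    return any(
        iou(boxes[i], boxes[j]) < threshold
        for i in range(len(boxes))
        for j in range(i + 1, len(boxes))
    )

# Two annotators roughly agree, a third is far off -> image goes to manual review.
boxes = [(10, 10, 50, 50), (12, 11, 52, 49), (80, 80, 120, 120)]
print(flag_for_review(boxes))  # True
```

This trades annotation budget (each image is labeled more than once) for confidence, and routes only disagreements to a human reviewer.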
Backstage & Beyond

Start of the Journey

The journey of ShowHue began when Hannie, during her second year of master's studies, received an email in her university inbox about a startup opportunity in Silicon Valley. At the time, NTUST offered a grant of NTD 200,000 (approximately USD 6,000), which required a pitch, and the Innovation Incubation Center selected two teams.

After returning from the Bay Area, ShowHue successfully secured further funding (US$20,000) from the Taiwanese government, encouraging the official establishment of the company. Initially, like many startups emerging from research labs, the team focused on computer vision research. They had little experience in commercialization or recruiting team members.

This changed in 2018: Hannie had graduated and was joined by talented engineers Ire and Jason, marking the point when the team began to dedicate themselves fully to the startup venture.