IBM Visual Insights
In 2018, businesses were reaching the second stage of their AI transformation journey. For the first time, non-technical users needed to work on the same image/video processing platform as AI Ops and Engineering teams to collaborate on more powerful, data-driven models. IBM Visual Insights (IVI) was created to meet this need.
I joined the IVI team just a few months after the product’s initial release. As Design Researcher and then Design Research Lead on IVI, I championed the product’s development from a user research perspective, beginning with release v1.
The bulk of the research conducted on IVI consisted of qualitative interviews with users and business partners. I spent approximately a third of each interview asking generative questions about participants’ model training pipelines, engineering teams, pain points, and prospective needs. I synthesized this feedback quarterly to prioritize future feature improvements and areas of focus for design and development.
Most of the time spent in user interviews focused on key research items, which might be explored through user click-throughs, card sorts, and A/B testing. We created personas for sponsor user groups and customized interviews accordingly. All interviews were recorded and annotated for later analysis by the design team. While I was the sole host of the interviews, I preferred to have the prototype’s UX designers or developers sit in to listen and jump in with questions as needed.
The design team worked closely with the product team, marketing, and executive stakeholders at semi-annual workshops to create the product’s roadmap. Each workshop was facilitated by a member of the design team; participants included product team members and executives from diverse geos, all flown into Austin, TX for the one- to two-day session.
Our team worked in an agile environment with frequent standups. A feature might go through three to five iterations in the design research phase before handoff. As Research Lead, I often distributed surveys and gathered quantitative data through external resources.
I also conducted in-depth consultations with subject-matter experts (SMEs), for example, speaking with a neurosurgeon about the applications of AI in 3D brain imaging, as well as internal design reviews.
Case Study: Freehand Labelling
One example of a key research item was freehand labelling, which was requested by dozens of high-spending enterprise users and was a selling point for our competitors. I conducted an initial design sizing, a competitive analysis, and exploratory research to begin work on this key feature addition.
Feedback from current users indicated a need for freehand labelling of image data in addition to box shapes, but also that users had little to no experience with photo editing tasks. I created a lo-fi interactive prototype using Adobe Photoshop and a demo from MIT, both of which had freehand labelling systems in place; this allowed us to avoid expending resources on a coded prototype.
IVI is a frontrunner solution for deep learning AI among non-technical and business users. The software centers on an intuitive toolset for labelling, training, and deploying AI vision models for object recognition, image classification, and action detection. IVI stands apart for its seamless integration of industry-adopted frameworks and its flexibility to incorporate custom assets from within an accessible user interface, with no engineering expertise required.
Recently, IVI was integrated into IBM Maximo Application Suite, where it has been deployed for continuous learning, primarily on manufacturing production pipelines. My research on this use case has helped drive adoption by clients such as Apple, Novate Systems, the American Red Cross, and more.