How Has AI Impacted Service Design?
Professor Susanna Zlotnikov and Dr. Arthur Sugden explore AI’s impact on service design through the lens of co-creation.
By Jess Ignasky
AI has become ubiquitous in our daily lives. However, as products and services continue to adopt the technology into their offerings, what is the impact on consumers?
Professor Susanna Zlotnikov and Dr. Arthur Sugden recently explored these effects in a fireside chat held as part of Professor Zlotnikov’s Introduction to Service Design course in the CMUiii Online program.
The discussion examined AI’s impact on service design through the lens of co-creation, a concept coined by Professors C.K. Prahalad and Venkat Ramaswamy in 2004.
Co-creation is defined as the way value is derived through involvement and collaboration between an enterprise and a consumer.
An important aspect of co-creation is the DART Model, which describes the building blocks of interaction between an enterprise and a consumer.
What is the DART Model?
Dialogue: communication and knowledge sharing between customers and enterprises
Access: the ability of customers to access information, tools, and resources to participate in the co-creation process
Risk Assessment: acknowledgement that both customers and enterprises face risks during the co-creation process
Transparency: customer access to information about the co-creation process, including costs, prices, and other factors
Using the DART Model, Professor Zlotnikov and Dr. Sugden explored its application to AI-led products and services, the risks of using AI in the co-creation process, and predictions for the future of service design.
Below is a collection of excerpts from their conversation.
The DART Model’s Relation to AI-Led Products & Services
Susanna Zlotnikov: How can we use the DART model to understand and critique how people interact with AI, specifically large language models (LLMs) from companies like OpenAI, DeepSeek, Meta (Llama), and Anthropic?
Dr. Arthur Sugden: The idea of co-creation is key to what LLMs have become. They’re built off of our dialogue, and they keep improving because of our dialogue. The same is true from an access perspective: we have watched these LLMs evolve, and we can see the outputs they give us. To some degree, they’re following the DART Model really well.
Where it starts to become interesting is with risk and transparency. Starting from transparency…they’re not transparent. I don’t even think they’re transparent to themselves.
With low transparency also comes risk. There are some profound risk factors every time we use an LLM. Unless you’ve checked the right checkbox and you’re paying extra money to use it, they’re learning from you.
Right now, there’s so much value put on data that nobody else has access to [except for these companies]. All of these companies are trying to be data siphons and then resell it, and so, the less transparent, the better. They benefit to some degree from not letting us know what the risks might be, and also from being minimally transparent.
Susanna Zlotnikov: How does the DART Model translate to your startup?
Dr. Arthur Sugden: We try to follow all four parts of the DART Model.
Dialogue: People use our collection of biomedical data. When they search it, the company learns from their searches in multiple ways, whether that’s what they wanted to find or, if they skip the first results and choose a later one, a better way to present the data.
Access: We give consumers access to the underlying data. We predict what they want to find and help them fine-tune it. Because the company is still improving, giving consumers access to our algorithms actually helps them.
Risk Assessment: We focus substantially on security, making sure any improvements that are trained for one customer can’t leak to someone else.
Transparency: We show how our algorithm works to consumers. This allows both access and transparency and helps us to improve.
Susanna Zlotnikov: How has companies’ use of the DART Model changed over time?
Dr. Arthur Sugden: Google is a good example of this. From 2000 to 2010, they said their goal was to get you off the Google homepage as quickly as possible, and that was their only target, which meant they wanted the best result for you at the top.
Two things have happened in the last 10 years. First, they’ve introduced substantially more personalization, so the results are more of what you want but perhaps less of what you need. The best example of this is silos: if you have a political viewpoint, Google will not show you anything that disagrees with it. That introduces risk.
Second, related to transparency and access, Google no longer wants you to leave quickly. The best example of this is the AI summaries that now appear at the top of the page. The search results are now below the fold. They want you to stick around.
The Risks of AI-Led Products and Services Related to Co-Creation
Susanna Zlotnikov: How does co-creation change when AI is directly interacting with the consumer vs. being more behind the scenes?
Dr. Arthur Sugden: Until quite recently, AI was better hidden, and people became more comfortable with it. An example would be Netflix predictions or Spotify suggested songs. Maybe you didn’t think about it as AI, but it was starting to become more acceptable.
The risks were lower because, as consumers, we weren’t interacting with it as much. When it becomes something you interact with directly, it demands much more of your attention and much more depth. That’s when the risk assessment changes and the transparency changes.
Susanna Zlotnikov: How do we think critically about introducing AI technology into any part of the service that we deliver to a consumer?
Dr. Arthur Sugden: I think the DART framework is an excellent place to start. Those are great guiding principles. Carving out your innovation a little bit away from the existing AI technologies also makes a lot of sense. You can use someone else’s LLM, but you’re not going to be able to compete just with that - you have to add something to it.
For example, in my company, we use AI of an entirely different type than the more popular LLMs. We’ve found that being in the early-stage startup space, offering transparency, and keeping risk low have been really good ways to push our product forward. We can improve upon the downsides of the big companies and sell that.
Susanna Zlotnikov: What are some biases you’ve noticed (with respect to English & non-English languages) in AI that we can be aware of when using these tools?
Dr. Arthur Sugden: LLMs are giving the average of whatever their training data is, and that training data started in English for a lot of these companies. It also started from the people who are most online. We’re running into an even bigger problem: as OpenAI said 6-12 months ago, they’ve already run out of data. So, other than your interactions with the models, they’re going to need to start generating their own data to use, which means they’re going to take the biases they already have and keep reinforcing them instead of pushing back against them.
We’re creating societal outcomes that are homogeneous, which contradicts what we should value. We need to find ways to push back when siloing happens. The strongest thing we can do, if we are concerned about that risk, is not to use LLMs in a way that pushes us toward homogeneity.
Susanna Zlotnikov: How do you recommend we build risk assessment and critical thinking into using AI tools while still benefiting from what they can do for us?
Dr. Arthur Sugden: My recommendation is not to trust just one provider and try to use a diversity of models. Sometimes, I’ll use Claude from Anthropic, sometimes I’ll use ChatGPT from OpenAI. By not being a loyal customer to these companies, you are putting market pressure on the value of diversity.
I also don’t give my personal information to these companies, even if I pay for them. With OpenAI, even though I pay for it, I change the names every time before I paste something into ChatGPT. I won’t use anyone’s real name, and I won’t use any company’s real name. I try to limit information leakage about things that are personal.
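To make that practice concrete, here is a minimal sketch in Python of how the name-swapping Dr. Sugden describes could be automated before text is pasted into a model. This is not his actual workflow; the function names, placeholders, and sample text are hypothetical illustrations.

```python
import re

def redact(text: str, real_names: list[str]) -> tuple[str, dict[str, str]]:
    """Swap each real name for a placeholder; return the mapping to undo it later."""
    mapping = {}
    for i, name in enumerate(real_names, start=1):
        placeholder = f"Name{i}"
        mapping[placeholder] = name
        text = re.sub(re.escape(name), placeholder, text)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Put the real names back into the model's response."""
    for placeholder, name in mapping.items():
        text = text.replace(placeholder, name)
    return text

# Hypothetical usage: redact before sending, restore after the model replies.
prompt, names = redact(
    "Summarize the contract between Alice Smith and Acme Corp.",
    ["Alice Smith", "Acme Corp"],
)
print(prompt)  # Summarize the contract between Name1 and Name2.
```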
Predictions for the Future
Susanna Zlotnikov: What are your thoughts on this current push of AI and personalization in industry? How do you see that changing in the next few years?
Dr. Arthur Sugden: It’s interesting to see the market factors for early-stage companies. One example is that your valuation is 10 to 20 times your revenue if you are software as a service (SaaS), and you can only achieve that valuation if you’re collecting people’s data.
To achieve that high valuation, your goal becomes collecting data. That data also acts as a moat: when you have unique data, no one else can enter the market, and that is highly prized by venture capital firms. So, in a way, venture capital firms have pushed us toward these data-collection tools, and the only justification a company has for collecting data is personalization, so we’re all being pushed toward maximum personalization and away from transparency.
In the next few years (and I’m giving away my bias as a consumer here), I’d love for companies to move back toward the DART Model. From a business standpoint, we want a business that is difficult to compete with, but I think we’ve also seen exhaustion with SaaS and are starting to be a little bit troubled by the risks of ChatGPT.
The fact that the market is in a period of change right now, combined with that exhaustion, suggests there’s a good chance that in the near future we’ll see a focus on more open companies and better transparency.