Welcome to the sixth article in my ‘Leadership in Marketing Cloud’ series. I created this series to address a major gap in references and documentation around leadership within the specific context of Marketing Cloud. When I searched through the community and official sources, there was nothing really available!

Please reference here for all the articles currently available for this series.

In the last article, we went over what AI is, what it can do and whether we should use it. As this is a significantly large topic that leads to a long article, I wanted to split it into two parts to make it less daunting to read. In this continuation, we will address the impacts and repercussions of using AI, followed by a general overview of what we, as leaders, should do about AI. If you have not read the previous article, I highly recommend going back and reading that one first.

As a reminder, the four major factors I recommended to focus on are:

  • Cost and ROI
  • Security, Legality and Risk Mitigation
  • Data and AI enablement
  • QA and Review

I bring these back up as they will play a major part in this article and serve as the core focus below. Now on to the impacts and repercussions of using AI!

Impacts/Repercussions of using AI

AI can have amazing results and amazing impacts on your Marketing teams and plans, but those impacts can also be negative. Considering that this is not a mature, well-traversed path, there are a lot of unknown or unregulated aspects that can lead to pitfalls and problems. Following the four major factors stated in the last section, I am going to use each one below to describe the potential impacts and repercussions that can come from them and how, as leaders, we should consider and plan around them.

Cost and Return on Investment

AI is expensive. Custom AI is definitely prohibitively expensive for the majority of small to mid-sized businesses. Training and upkeep of a large language model (such as GPT-3) can cost anywhere from $4 million to over $40 million (CNBC). There are estimates out there that place generative AI data center server infrastructure plus operating costs at over $76 billion by 2028 (Tirias Research/Forbes).

Now that is for building an AI, not utilizing an existing third-party service or a ‘simple’ AI (singular in focus, e.g. a chatbot) – so you can take a breath and relax…but not too much! Because even if you were to use a third-party or ‘simple’ AI, it is still not cheap. Even a fairly run-of-the-mill, pre-built chatbot can run you $40k per year, and that is just the baseline. Anything beyond basic can exponentially increase costs. If you were to try to implement things on a budget, you could get subscriptions to tools like ChatGPT or DALL-E 2, etc. But those are public tools – which may open up security and integrity risks, and they do not allow you to customize anything at all on the AI. It all goes into a complete black box from prompt until delivery.

For any AI that you want to implement, you will need to review whether it is something you need to build and maintain, whether it’s a customizable service, or whether a simpler, pre-built or third-party tool is the best return on your budget.
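To make that review concrete, a rough total-cost-of-ownership comparison can help frame the build-versus-buy conversation. The sketch below uses the $40k/year chatbot baseline mentioned above; every other figure is a hypothetical placeholder that you would replace with real vendor quotes and internal estimates:

```python
# Back-of-the-envelope comparison of AI sourcing options.
# All figures except the $40k chatbot baseline are hypothetical placeholders.

def five_year_cost(upfront: float, annual: float, years: int = 5) -> float:
    """Total cost of ownership over a planning horizon."""
    return upfront + annual * years

options = {
    # option name: (upfront build/setup cost, recurring annual cost)
    "custom-built model":  (4_000_000, 1_000_000),  # training + upkeep (low end)
    "pre-built chatbot":   (0,            40_000),  # vendor baseline
    "public subscription": (0,             2_400),  # e.g. per-seat plans
}

for name, (upfront, annual) in options.items():
    print(f"{name}: ${five_year_cost(upfront, annual):,.0f} over 5 years")
```

Even this crude model makes the point: the gap between the options is orders of magnitude, so the real question is which tier of capability (and risk) your budget can actually sustain.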

Security, Legality and Risk Mitigation

By having robust security on your data, PII, proprietary information, etc., you can ensure that what you use the AI for is not going to have ‘hidden costs’. Not all AI is protected in the sense that someone cannot go back through the logs to find what was input/output and use that in their own ways – potentially sharing confidential information with competitors. There are many other breach possibilities as well, such as someone gaining access to tools or databases via information accidentally shared in AI prompts. As a leader, this is a huge consideration, as without these protections you and the company are extremely vulnerable.
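One practical mitigation for the accidental-sharing problem is to scrub obvious PII from prompts before they ever leave your environment for a third-party service. This is only a minimal sketch – real PII detection needs far more than a couple of regular expressions, and the patterns and function names here are my own illustrative assumptions, not part of any product:

```python
import re

# Hypothetical prompt scrubber: redact obvious PII before a prompt is sent
# to an external AI service. Real systems need far broader coverage
# (names, addresses, account numbers, ...).

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace matched PII with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(scrub("Follow up with jane.doe@example.com at 555-123-4567"))
# -> Follow up with [EMAIL REDACTED] at [PHONE REDACTED]
```

A gate like this sits well in front of any integration point, so the policy is enforced in code rather than relying on every individual remembering the rules.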

By having a strong Security team ensuring all is safe, and a strong Risk Mitigation team to help plan and prepare for future security needs, you can use AI without worrying that you are potentially undermining all the good you are trying to do. But of course, this adds costs not just in money but also in time and resources. It also adds another layer or twenty to your operations, along with some further limitations. Finding the right balance between security and agility is a decision that takes a long time to get right.

The second factor in all of this applies to those who work with other businesses as clients (e.g. agencies or consultancies). There is yet another layer of data protection and security on top of your own corporate information and data. You also have a responsibility to your clients to ensure that everything they provide you is as safe and secure as possible, so there need to be additional sweeps and processes in place to make sure there are no leaks or other holes that can lead to major issues.

So yes, security policies and restrictions can be annoying and seem like overkill, but the level of confidence and comfort you get from having them in place to protect you and the company is well worth the annoyance. But at what point is it too much, whether in cost or in limitations? That is where you come in as a leader, to find the right balance.

Data and AI enablement

Without the data and information that is needed to provide good insights or produce relevant content, an AI is not going to return much that is useful. Much like the old saying “garbage in, garbage out” – without proper input, what is output is going to suck.

AI is not magic. It takes a lot of hard work and dedication not just to create it, but also to enable and maintain it. Without the proper baseline or feeds, it can quickly become obsolete, corrupted or confused (much like many people I know…. yes, I am looking at you Todd!). So, while the output looks glamorous and so simple and easy, it hides the hard work that is required in the background to keep it running.

Think of it like a play. The actors on stage make it look so easy – I mean, heck, they are just walking around and talking, and for the most part we are all capable of doing that. But what you do not see are the hours and hours of practice, memorization, alterations, edits and so on that go into that two-hour performance. And that does not even include the backgrounds and props, the lighting, the production and financing, the script writers and directors and so on. So much goes into that small thing, but we never really see any of it, just the final result.

For example, to get the most out of Salesforce MarketingGPT, you need to connect it with Data Cloud. This acts as the core and base for MarketingGPT to do its thing. Without it, it’s like having a top-of-the-line engine but no frame to put it in – it won’t do you a heck of a lot of good other than take up space and look pretty. And if you do have a frame but it is too weak to hold the engine, then at some point, if not immediately, it will all come crashing down, potentially causing catastrophic damage to one part or another. That all being said, once you get everything you need to empower your AI, what it produces is almost magical. It can make such a significant change in so many things that, for the most part, the effort and costs are worth it.

Of course, each of those supporting roles or requirements adds costs, time, effort, operational changes and additional resource needs. I know a lot of this ties back to money in some way – and honestly, doesn’t everything? But there is more to it than just money. You need to ensure, with the budget you have, that you can get each of these pieces to a level where it can actually be implemented and run successfully, or it will just become a money pit.

QA and Review

This one should be obvious, but it is a part that many people overlook. AI is powerful and skilled, but it is not omniscient. It works within a framework created by its algorithms, its library of learned data and its prompts. At times these can lead to off-the-rails results that are inaccurate, improper or even potentially malicious. This is why, for everything produced, you need at least another pair of eyes to review it prior to implementation – just as you do with human-generated content.

Accounting for this is a major resource draw and can reduce the scalability of the AI. By requiring human review, we reduce the output capacity and bottleneck performance based on the speed and capability of the reviewers. This is why many are tempted to lighten restrictions here and only review certain things, or generalize the review. As a leader, we do need to find the right balance, but I heavily emphasize doing your due diligence, as a mistake can lead to catastrophic consequences.
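The bottleneck is easy to model: shipped output is capped by whichever side is slower, the AI drafting or the humans reviewing. The rates below are made-up illustrative numbers, not benchmarks – the point is the shape of the math, which tells you how many reviewers you need before the AI’s speed actually matters:

```python
# Simple throughput model for a human-in-the-loop review gate.
# All rates are illustrative placeholders; measure your own team's numbers.

def effective_throughput(ai_rate: float, reviewers: int, review_rate: float) -> float:
    """Assets shipped per day: limited by the slower of AI drafting
    and human review capacity."""
    return min(ai_rate, reviewers * review_rate)

ai_per_day = 500          # hypothetical: AI drafts 500 assets/day
review_per_person = 40    # hypothetical: one reviewer clears 40 assets/day

for team_size in (2, 5, 13):
    shipped = effective_throughput(ai_per_day, team_size, review_per_person)
    print(f"{team_size} reviewers -> {shipped:.0f} assets/day shipped")
```

With these placeholder rates, two reviewers cap you at 80 assets a day no matter how fast the AI is – which is exactly the pressure that tempts teams to loosen the review gate.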

Imagine working for a company that makes games. You are tasked with creating a new game and releasing it into the market. For inspiration, you use your generative AI tool to provide you with a simple mechanic and structure for a game. What it provides sounds amazing, so you build it up based exactly on the response and do not bother to check or research it. All your executives love it and it moves forward to release. Then upon announcement you get hit with a copyright infringement case, as that game already exists and has for a while.

Now not only have you lost all that work, time and money in building it – you have also gotten the company into legal trouble AND sullied its good name, marking it as a place that is ok with stealing others’ ideas and releasing them as its own. Not looking good for you. If instead, upon receiving the response, you had done some research into existing games or taken some time to review and change the results, you could have prevented the whole thing and used the response as the inspiration you originally intended.

AI is not, and likely never will be, infallible and 100% accurate. This is because humanity is fallible, and so anything we build or create is fallible as well. This needs to be a major consideration when working with AI, or you can end up like the poor soul above – likely to not only be fired but potentially sued as well, and unlikely to ever find a job in this field again.

So, as a leader, what do I do?

Well, you need to do a ton of research and analysis (if only we could use AI to get all this upfront research done for us…) on AI and what your company would need in order to optimize implementation into your company’s plans and toolset. You also need to look heavily at budget. AI is EXPENSIVE. Like REALLY expensive. This can be a go/no-go factor for many companies that are not ready to put that kind of upfront cost down on something that is still not fully matured or proven to be effective yet.

You also need to consider how it will change the job framework and organizational model you are using now. If you have a team of four or five copywriters, will they all still be needed when you implement the AI? Can you reduce it down to two, with one writing copy for the major projects you want done outside AI and another reviewing/editing all the generative AI content that needs to be produced? What happens to the others?

Also, with the shift in needs, will you have enough technical people to help maintain and update the AI to ensure it is producing optimal content? Will you need to hire more people in different roles to correctly utilize this tool? Will this adjust the current overall operational costs and how will that affect your bottom line? Is your company ready to make these major shifts in focus, operations, processes and more?

As you can see above, your job as a leader is to ask a ton of questions and get answers. A good portion of the questions, though, honestly do not have a single, solid answer. They are usually more of an ‘it depends’ or ‘maybe, but also maybe not’, which then leads to more questions or tangents you need to go down to ensure it is all a positive and not a negative.

With all that being said, I wish there were a more solid direction I could provide, but there are so many factors involved in this decision that the answer could change not only from person to person, but from moment to moment. Hopefully the above gave you some insight into how to view AI as a leader and helps you prepare for your own investigation into whether AI is right for your company.

My next article will, this time for real, be on Delegation and Enablement. I know it’s not as snazzy or eye-catching, but it is still a highly important and impactful aspect of leadership. Hope you enjoyed this and see you in the next one!
