BuzzTheory AI Policy

Position Statement, Philosophy & Approach to AI

Artificial Intelligence (AI) is a vital tool in the sales and marketing landscape. In the right hands, it can enhance revenue opportunities, speed content creation and drive customer and sales partner engagement. AI isn’t new to marketing. It’s been used for years to optimize spending, find SEO content gaps, check grammar and coach live representatives in real time, for example. However, the release of ChatGPT and the widespread adoption of “generative AI” apps and features have brought it into the mainstream.

AI is fast-evolving and has become embedded in so many applications that most companies’ AI policies and disclosures, which are focused on dedicated AI software applications, are already dated or obsolete. Application-centric policies are inadequate for the breadth of AI’s sprawl. With AI assistance becoming a feature in everyday applications and devices, we must build meaningful AI policies around strategy, outcomes, processes, checks and balances, and other safeguards for the use of AI at large.

AI is a far-reaching technology with commensurate implications. But, for all its complexity, our guiding philosophy is simple: Leverage AI when it benefits our clients and avoid it when it doesn’t.

In the interest of transparency, we’ve made our internal AI use policy public. It follows.

BuzzTheory’s Internal AI Use Policy

This AI use policy guides our team in the transparent and ethical use of AI. AI use must always align with our values and prioritize human-led revenue, marketing, sales and communications initiatives. Customer rights must always be preserved with the use of any new technology, and AI is no exception.

Guidelines for Responsible AI Usage

Transparency

We’re committed to complete transparency about our use of AI. We use AI to speed processes and assist in our company’s content development, modification and atomization. We’ve developed AI standards and best practices that facilitate transparency, accountability, quality, and privacy.

We use AI to assist in human-led content ideation, creation, editing and formatting. We never fully automate content development with AI. Our processes ensure every content development initiative is led and reviewed by people who understand our audience, our customers’ audiences and AI’s limitations.

AI Tool Sourcing & Usage

The speed at which AI tools are proliferating – both as standalone solutions and within established software applications and devices – makes naming every possible AI usage instance impractical. AI is now in our word processors, email applications, project management tools, spreadsheets, presentation software, imaging and video software, website builders, ad management platforms, analytics suites, and nearly all other business tools. And it’s now running directly on our communications devices.

Moreover, we don’t operate in a vacuum. We’ve encountered clients openly using AI tools for ideation, creation, editing and more, even as we pass drafts back and forth between our teams. It would be impossible to ensure that a work product is free of AI influence or contribution.

For all these reasons, it’s impractical to identify AI usage at the individual tool level. Instead, we rely on principles for AI tool use within our organization, in our internal work processes, and for client deliverables. Those principles are as follows:

  • Human-led. All projects, tasks and deliverables are human-led. In our organization, this means humans initiate and monitor the development process. Additionally, no deliverable, no matter how small, can be handed off to a client without human review.
  • Privacy and data security. Data protection and retention standards must always protect proprietary and sensitive personal information.
  • Ethical use. As with all other content, AI-generated material must not be used to mislead or manipulate people, or in any use case that could be deemed unethical. Additionally, AI may not be used to impersonate anyone without their permission. With permission and review, employees may use AI to help them mimic the style of a teammate or client to draft or edit content on that individual’s behalf. It is also our responsibility to ensure that all AI outputs are free from bias and are accessible.
  • Fact-checking. AI hallucinations are real. All data and sourcing must be human-checked, always.
  • Security protocols always apply. Since AI tools, like any software asset, can be targeted in a cyberattack, our security team and advisors must clear any new tool before use. Always use applications approved by our IT and/or security teams.

When Not to Use AI

Use only tools vetted by our IT and/or security teams. If you’re interested in a new tool, you may request vetting by management and our IT/security teams.

Training on AI Usage

All employees are trained in AI best practices, including the technical aspects of using AI and the ethical and other considerations detailed in this policy. Individual tool tutorials are available from vendors, our IT teams and, in some cases, third-party trainers (live and/or through online courses).

Best Practices for Implementation

Always follow these steps:

  1. Understand the AI system or built-in feature you’re using, including its limitations.
  2. Managers must ensure that all employees and new hires have read this policy.
  3. Document use cases and standards whenever possible. Sharing experiences with your team members can foster a positive culture and skills flexibility.
  4. Build constant training into your routine by attending webinars, company-led tutorials and available courses.

Employee Acceptance

We aim to leverage AI technology to deliver maximum benefit to our clients. Your agreement to this policy is implicit in your employment and use of AI. Non-compliance could lead to disciplinary action or employment termination.

FAQs

  • How are AI tools and features vetted?
    AI tool evaluation is complicated. All the factors influencing traditional software purchases apply – cost and budget implications, impact on productivity, revenue potential (the client’s and/or BuzzTheory’s), whether it’s a better or cheaper mousetrap, integrations, originality vs. redundancy, etc. However, AI also brings new concerns about security, efficacy, quality and accuracy. And it’s evolving faster than any product set in history. In our firm, the north-star principle that guides our navigation of AI tools and their use cases is that they must benefit clients.
  • What does “human-led” mean in practice?
    The term “human-led” means different things in different organizations and contexts. BuzzTheory’s core competency is planning and generating high-performance content for complex, fast-moving technology sectors, content that activates revenue generation across equally complex, fast-moving distribution channels. These aren’t cases where cheap or mediocre content is “good enough.” AI tools, by virtue of their probabilistic assembly, produce work that’s average at best, so AI can’t compete with content developed by our team. However, AI tools can speed or enhance research, production, formatting and atomization under guidance from an expert.
  • What about AI detection tools?
    There are three big problems with AI detectors:
    • They don’t work. OpenAI, the maker of ChatGPT, has acknowledged as much. Detectors flag both lousy and well-written content, which is ironic since mediocre is the best you can hope for from AI, thanks to its probabilistic nature. Best practices like using grammar checkers or writing snappy CTAs trigger them. The companies that offer detection suites load their service terms with disclaimers despite lofty front-page claims. And when confronted with false positives, those companies advise you to try different versions of their detectors or tell writers and editors to explain to their employers that the tools are faulty.
    • They’re pointless. The premise underlying AI content detectors’ relevance is that Google is looking for, and penalizing, AI-generated content. That premise is false. Google has said so publicly many times, and it is itself one of the leading providers of content-generating AI models; it would make no sense for Google to penalize AI usage for its own sake. The correlation between AI-generated content and declining search rankings and traffic is a function of the unhelpfulness that accompanies most AI content (e.g., copycat blogs or web pages). Human-generated copycat blogs have also suffered in Google’s Helpful Content Updates (HCUs). Generic content just doesn’t cut it, whether it’s produced by humans or AI.
    • AI is here to stay. Data clearly shows companies want to work with consultants and agencies that lean into AI and can help them understand where and how to use it effectively. That work is right in BuzzTheory’s wheelhouse as a company deeply steeped in the emerging tech space. Our clients are gaining search visibility while most companies suffer from HCUs, and we were the first agency to announce a proven generative engine optimization (GEO) methodology for placing our clients in AI chatbot and search engine results. Clients don’t just value our expertise in this area; they need it to compete.
  • Who do I contact with questions?
    Reach out to our IT team with software questions. Inquiries about this policy should be directed to your supervisor or, in that person’s absence, the managing partner.