Beyond the Hype: What the Future of AI Really Holds

An Interview with John Byron Hanby IV

Founder and CEO of Iternal Technologies, Inc.

By Ryan Anderson

What once seemed like Hollywood fantasy has, to a degree, become reality.

Movies like “2001: A Space Odyssey,” “The Terminator” and “The Matrix” paint a bleak future in which Artificial Intelligence (AI), created to make human life easier, expands well beyond its programming to the point where the machines are in control.

“Well, you listed two of my favorite movies of all time,” said John Byron Hanby IV, Founder and Chief Executive Officer of Iternal Technologies, Inc. Hanby is a featured panelist at the upcoming StratCommWorld 2023 conference.

“Don’t place all of your trust in something that you don’t understand. Make an effort to understand AI, and if every one of us can understand it and ask critical questions, that’s going to maximize the benefit AI has on society.”
– John Byron Hanby IV

“My background is in film production. So, growing up, I was sitting on the couch watching those very movies with my dad as a five-year-old,” Hanby said. “And the crazy thing about it, the most memorable scene is when the Terminator reaches out trying to grab you as it is being crushed. I thought it was real, and that excitement led me on the path to become a filmmaker, and later seek to tell stories in unique ways.”

His path includes blending media production with AI to create interfaces where content can be created and customized at a fraction of the time and cost.

Hanby graduated from the University of Texas at Austin in three years as the top graduate of the film department at the Moody College of Communication. He has won more than 65 international film awards.

With Hanby’s film background, the topic of cinematic AI came naturally in a recent interview. 

“The cool thing about our technology and what we do is we use AI to automate the production of really any type of communications content, whether that’s sales and marketing materials for corporations, or citizen engagement messaging for governments, or even internal operational documents,” Hanby said. “All of that can be automated and assembled with the click of a couple buttons using our technology.”

Those button clicks can create content in media formats ranging from PowerPoint slides and audio or video content to more complex items like graphic design brochures.

“If you think about the annual reports and communications materials that large corporations and government agencies put out, or any of their internal strategic documents, those cost a lot of money and time to produce. Through our automation, we’re able to reduce the cost to one-tenth of the current manual methods.”

There’s no question that AI interfaces like those Hanby describes, and many more, have made their way into our lives. Yet, creators of the technology and consumers alike wrestle with how to best harness its potential and avoid the pitfalls of these emerging and mostly unregulated technologies.

At the StratCommWorld conference being held May 1-2 at the National Press Club in Washington D.C., the topic of AI is front and center. Hanby is a panelist in the session Unlocking the Power of AI, which is aimed at providing attendees practical insights about the latest AI technologies, as well as tips and strategies for how to use AI to create content at scale.

Another aspect of AI that Hanby is exploring involves creating custom personalized videos using a series of questions and prerecorded video clips that can be seamlessly edited together automatically.

“Literally, if you can answer these questions and click the button, you can generate a video and it looks like it cost $10,000 or $15,000 to produce by some kind of marketing agency,” Hanby said. “Our AI democratizes the power of communicating, allowing anyone to click a button and generate a very impressive video, brochure, slide deck, or beyond.”

The technology can be especially beneficial for non-profit organizations and other groups that do not have thousands of dollars to spend on marketing campaigns. Of course, even large corporations can benefit from cost savings through AI. After all, doing more with less is a key mandate for companies, organizations, and agencies of all sizes.

While Iternal Technologies, Inc. is using AI to great benefit by providing cost and time savings for its customers, according to Hanby, merely using AI does not mean it is being used well.

“A lot of people say that they can do cool things, but what is the value? You can make something that’s theoretically awesome, but it doesn’t necessarily translate into some positive value for either a corporation or a government or the public in general,” Hanby said.

As is often the case with any new technology, an assessment of the pros and cons of AI is needed in order to ensure that the risks do not outweigh the rewards. Part of that assessment centers on the need to be educated about what AI can and cannot do, as well as what it should and should not do.

“I break it down into kind of two categories. There’s the great positive opportunity, and then there are also the things that we need to be very thoughtful about in terms of ethics, how we utilize AI, how we’re teaching our future generations about the ethical use of AI, and the way to approach AI so that it ultimately is a net positive,” Hanby said. 

“I think that the pros most certainly outweigh the cons provided we have the right structure around the education component of AI. But, more so now than ever before, we have an incredible opportunity with knowledge at our fingertips that hasn’t been seen since the early days of search engine capabilities, right? And the ability to get a quick overview on pretty much anything that you can think of or ask about in very short order is going to be tremendous for education in general.”

When it comes to AI use in educational settings, one must keep in mind the dangers of relying solely on AI as an information source. Just like its human counterparts, AI can provide false and misleading information from time to time and is only as accurate as the data it is given to learn from.

“The research opportunities and the learning opportunities using AI are tremendous, that is, provided the AI is not lying to you. And we’re starting to see some of this come out in the news about some things generated using AI not necessarily being one hundred percent factual,” Hanby said. “And so, I think that there is an extreme amount of importance that we need to place on educating the future generations around how much do we really trust what the AI is saying? And how can we put mental safeguards around the way that we approach interactions with AI to not trust it implicitly?”

Of course, one of the key elements of understanding AI is the realization that it still must be fed the initial data in order to generate content. Just like a diet full of junk food affects one’s health differently than a diet of lean protein, AI is very much a product of its diet.

“Whether you’re a comms leader, or you’re somebody in government looking at how you can best engage with and communicate with citizens, I think the most important thing is that from a data perspective you have to have your data house in order. You have to be able to manage your content in a way that’s modular and flexible. You need to provide the right information to actually train these AIs to accomplish whatever they’re trying to achieve,” Hanby said. 

“Garbage in, garbage out is a common saying. So, having a catalog or a library of the top 500 or 1,000 things that you’re going to say—that you can give to the AI as training data to build off of— that’s the way to do it. Because as soon as you start having erroneous data in there, or old or outdated information that’s not accurate, you’re suddenly going to get a much more inaccurate and poor experience when it comes to the expectations of what that AI can deliver for you.”

Although AI has been around in one form or another for years, the recent surge of attention on the subject has many people playing catch-up to join the AI bandwagon for fear of being left behind. Others are still trying to reconcile stories of AI run amok on social media with stories touting AI as a tool of inclusion rather than division.

“I think that over the next couple of years we’re going to get a lot more clarity around where the value in AI is going to lead. One example that I’m exploring in my next book is the concept of generative AI, which is what the ChatGPT and Microsoft Bing AIs are using to generate text paragraphs or responses based on user input or a question,” Hanby said. “And I think that capabilities like that are very cool and very capable. At least in the world of corporate communications, I think that they have an interesting nuance, which is that you can generate as much content as you want, but people aren’t necessarily going to consume it.”

The key to using AI successfully, according to Hanby, comes down to balance, timing and audience identification.

“The most important thing is that you’re generating the right information, you’re providing it to the right person, and you’re serving up that content at the right time. And if you’re doing it at a global scale, it has to be in the right language as well,” Hanby said.

One of the potential expansions of AI Hanby is working on involves creating a way to erase language barriers as a means to bring more people together.

“I could be sitting with you here, but I’m not just speaking English, I could also be speaking every foreign language translated in real time with AI,” Hanby said. “I could be communicating with everybody across every language in the world. There’s so much value and opportunity for extending reach of ideas and conversations in that way. Ultimately, over time I think that will become a more critical component to how businesses operate.”

Another example Hanby cited relates to the ability of AI to improve the international travel experience through partnerships with the Transportation Security Administration (TSA) and Customs and Border Protection within the United States Department of Homeland Security.

“Let’s say you’re a Japanese national who does not speak any English, and you’re visiting the United States for vacation. You’re 72 years old and you happen to have a pacemaker,” Hanby said. “Going through customs and the screening processes for any foreign traveler is very stressful. Imagine that traveler being able to select on their phone, while they’re either on the plane or about to go through customs, a video in Japanese that would tell them exactly what they need to do based on the information they provided.”

These efforts at bringing together people who speak different languages, respecting digital identity, addressing cyber security concerns, and complying with government oversight are among the most exciting initiatives within the AI world.

Innovation often outpaces the regulation and governance of new technology. Both the internet and social media were founded with ideals of bringing people together. Yet, in many ways, they have grown into something else entirely, while laws meant to protect users struggle to keep up.

With that in mind, how can users and creators of AI be guided to use the technologies responsibly? For Hanby, the answer to that is tied to market conditions and awareness.

“I think the free market is going to determine a lot,” Hanby said, “especially now as people are becoming more aware of their own data footprints and the information that organizations collect about them. There’s an inherent caution. People are going to vote with their dollars in terms of which companies they’re going to engage with and which companies they’re going to support.”

“Ultimately, organizations that are doing things in a responsible, fair manner are going to win,” Hanby added. “I can speak from my personal experience running Iternal Technologies. We’ve made a lot of deliberate decisions around the types of partners we’re working with in the AI space, specifically working with NVIDIA. They are incredibly focused on the ethical use of secure, compliant AI technology and building it in a way that puts the user and the individual first.”

In addition to potential legislative safeguards protecting users of AI, Hanby is a firm believer that education about AI is a critical element moving forward.

“Humans tend to trust technology when they don’t know what’s going on behind the scenes,” Hanby said. “You get in your car to drive to work and you expect it to turn on. You press the button to turn on your computer and you expect it to do the thing that you want it to do.”

“But in reality, there are so many things that need to happen, that need to go right, for that to occur in a desirable fashion. Providing youth with an understanding of how AI works as well as the areas in which AI is lacking will maximize the positive benefits of the technology.”

“Believe it or not, we already pretty much run our entire lives with AI,” Hanby said. “We just don’t realize it. I think we’re starting to realize that we are being socially engineered in a lot of scenarios.”

Hanby’s new book has a working title of “AI for Our Future Generations.” It is targeted for a late 2023 release.

“Regardless of what type of phone you use, or what type of application you use, when you go to an app, there’s an algorithm that’s serving you content or providing information. And that’s a form of AI. It’s already prevalent,” Hanby said. “I think the most important thing as we go forward is being aware that AI is a tool. It’s a capability. It is technology that exists. Use it responsibly. Use it in the right way, and it can deliver amazing benefits.” 

“Don’t place all of your trust in something that you don’t understand. Make an effort to understand AI, and if you can understand it and ask critical questions, I think that’s going to give us the best chance for continuing on a positive track.”


Ryan Anderson

Ryan Anderson is a former Collegiate Sports Information Director (SID), Newspaper Editor and Reporter who currently works as a freelance journalist. He earned a Bachelor of Arts degree in Journalism from the University of Central Florida with a minor in Advertising/Public Relations. He also earned a Master of Science Degree in Sport Management from Houston Christian University. Currently, Ryan is working on a Master of Arts degree in Mass Communication with a concentration in Public Interest Communication and a Graduate Certificate in Global Strategic Communication from the University of Florida.

John Byron Hanby IV

John Byron Hanby IV is a passionate innovator and creator of better ways to connect people, knowledge, ideas, and experiences. He is the Founder and CEO of Iternal Technologies, a leading AI and automation company serving the Fortune 500. Previously, Hanby founded and ran the top-rated film production company in Austin, Texas, for nearly a decade. He has won more than 65 international film awards, including a Student Academy Award nomination from the Academy of Motion Picture Arts and Sciences at age 20 and an SXSW Film Festival award at age 17. Hanby graduated from The University of Texas at Austin in three years as the top graduate of the film department at the Moody College of Communication and holds a 3rd Degree Black Belt from the World Tukong Moosul Federation.
