
Cybersecurity To Protect From Malicious AI: Kurt Long Of BUNKR On How To Develop An Effective Product Security Strategy


Cybercriminals are a coordinated bunch, and the response to them needs to be as well. Networking is the key to a successful career in any industry, and the same is true of the cybersecurity field. Cooperating and reaching out to others to share solutions forges new possibilities both in defending against evolving threats and in one’s personal career.

The era of malicious AI presents a unique set of challenges to organizations, including the escalating need to identify vulnerabilities and minimize security threats to their products. How do Chief Product Security Officers (CPSOs) prioritize risk management and mitigation to safeguard their organizations in this new frontier? As a part of this series, I had the pleasure of interviewing Kurt Long.

Kurt Long is the co-founder and CEO of BUNKR, an app dedicated to providing users with a streamlined digital experience and unparalleled privacy. He is a successful entrepreneur with more than 25 years of experience starting, growing, and building financially successful information security and privacy businesses. Long is also the Executive Director of the Long Family Force for Good Foundation, which focuses on supporting non-profits dedicated to improving the mental health of children and families. Prior to establishing BUNKR, Long founded FairWarning, a company that pioneered privacy monitoring for patients in the healthcare industry. Under his leadership at FairWarning, privacy monitoring became a staple for healthcare organizations around the world, protecting vulnerable patients from cybercrime and strengthening trust between patients and their care providers. FairWarning secured more than 250 million patient records for healthcare systems around the world. The companies Long has founded have gone on to protect entities holding more than $1 trillion in assets, including global banks, wealth advisors, and other financial services-related businesses. Previously, Long started his career at the Kennedy Space Center as space shuttle databank mission lead for the Hubble Space Telescope and numerous other missions. Additionally, he has been granted nine patents in privacy and security technology globally and holds a master’s degree in theoretical mathematics as well as a bachelor’s degree in business.

Thank you so much for joining us in this interview series! Is there a particular story that inspired you to pursue a career in cybersecurity? We’d love to hear it.

When the internet became popularly commercialized in the early to mid-90s I made a decision to leave the corporate world and start my own company. I was already an expert at writing software for inter-computer communications, so the transition to applying these skills to the newly mainstream internet was natural. My small company was pulled very quickly into some of the most exciting projects happening in the world at that time, including one that would be the first to put telecommunications bills online for GTE, which is now Verizon. As part of this project, we were asked to collaborate with multiple security labs in Boston which were staffed by some of the greatest security minds available. This project also saw me spend a lot of time with security product managers at Netscape Communications in Mountain View, CA. These endeavors — which carried the pressure of working alongside foremost security experts and delivering on behalf of prominent clients — prompted me to take my pivotal first steps toward a 30-year career in information security and privacy.

Can you share the most interesting story that happened to you since you began this fascinating career?

I’ve been a part of so many incredible undertakings that it’s impossible for me to pick a single story that stands out as the most interesting. I had a lot of personal responsibility as a fresh university graduate working on space shuttle missions that proved historic; those were especially formative experiences. That work came with the strain of achieving perfection by necessity, dealing with steep setbacks, occasional high-profile failures, euphoric highs, and a sense of pride that will last a lifetime. While working in the space program, I was mentored by a generation of people who were raised on farms, fought in wars for the United States, and received degrees from several of our best universities including MIT, Georgia Tech, and UC Berkeley. These men and women expected everyone to step up and deliver on the high stakes the nature of the work demanded — it helped instill a work ethic in me that I carry to this day.

As for an anecdote pertaining to my present line of work and today’s high-tech world: In the early ’90s I started OpenWeb, one of the earliest internet software companies of its kind. This is when everyone rushed to get on the internet and establish an online presence, so we serviced every possible company and type of person you could imagine. This included global dating sites, a nunnery, a fair share of charlatans I was fortunate enough to dodge, high-profile celebrities and comedians, realtors, lawyers, and Fortune 500 companies. As I mentioned earlier, eventually we found our sweet spot in developing security software for major corporations. We commercialized this software and sold it globally to some of the biggest companies in the world, including Halifax Bank of Scotland, EMI Music, Roche Pharmaceutical, and BlueCross Blue Shield. Within two years I went from being a software engineer at IBM to owning my own company and traveling the world in service of selling, securing, and supporting some of the world’s most prominent businesses. Those early years of the internet laid the foundation for everything that has come into my professional life since; it has been invigorating and rewarding to actively participate in an industry that has made such a dramatic impact on the world.

You are a successful leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?

Listening, Persistence, and Integrity.

I place an incredibly high value on the skill of listening in my personal life. The most effective leaders are skilled listeners who respect the importance of listening to their customers, understanding their needs, and learning how to provide them with the greatest possible value in order to improve their lives.

Likewise, I’ve found that no matter who you are, creating and running a business is not easy and you’re bound to run into challenges and failures along the way. Persistence is what sets successful entrepreneurs apart from the pack; it’s the quality that empowers people to stay focused on their goals, learn from their mistakes and setbacks, and apply those lessons moving forward.

Finally, integrity is the cornerstone of trust. When businesses consistently demonstrate honesty and ethical behavior, they’re able to build a deep relationship of trust with customers, clients, employees, and stakeholders.

Are you working on any exciting new projects now? How do you think that will help people?

I am working on several exciting projects, all of which are centered around the idea of valuing our God-given humanity and creating trust in the world.

On the non-profit side, the Long Family Foundation is partnering with Voluntas of Copenhagen, Denmark to deliver the world’s first Youth Meaningful Index program. This is an ongoing project where we survey children to discover what provides them with the most meaning and satisfaction in life and, in turn, provide them with what they need to be empowered. So far, the results have been extremely encouraging. We are using AI to interpret and categorize the results into classifications based on what’s most important to them: these categories include Connection, Play, Learning, Creativity, Solitude, and Nature. Each of these groupings carries accompanying details that can be provided to teachers and parents. The findings of the program have already been shared and used by schools in Europe; the next phase of the project is to scale the results throughout the world, including within the United States.

The business project that’s dominating most of my time and that I’m most excited about is called BUNKR. It’s a multi-faceted mobile app, but at its core BUNKR is an easy-to-use messenger that provides bank-level security to its users at all times. BUNKR’s unique patent pending design eliminates the possibility of customers having to deal with imposter fraud, phishing attacks — SMS-based or otherwise — and AI-generated messaging scams. BUNKR was designed to provide users with the highest degree of trust while complying with U.S. and international security regulations, including SEC recordkeeping requirements. BUNKR also includes an easy-to-use password manager, secure cloud storage, and a notes function. As accessibility is one of our top priorities, the app is available and cross-functional across iOS and Android devices. Because BUNKR’s first principle is that privacy is a human right, we never sell our customers’ data under any circumstances.

BUNKR is designed for use by legitimate individuals, families, groups, and businesses — unfortunately, the design of popular “secret” messengers such as Signal, Telegram, Tox, and others supports illegitimate activities including organized crime, insurrection planning, and imposter fraud operations of all kinds. Many of these messengers subscribe to the tenet of Privacy at all Costs, which allows these criminal undertakings to operate unimpeded at the expense of everyday, above-board users. By pricing BUNKR at 99 cents, we are providing the world with a digital messaging alternative that’s both affordable and secure. We are currently used in 32 countries around the world and are growing fast.

Ok super. Thank you for all that. Let’s now shift to the main focus of our interview. In order to ensure that we are all on the same page let’s begin with some simple definitions. Can you tell our readers about the different forms of cyberattacks prevalent today?

Absolutely — cyberattacks come in various forms, each targeting different aspects of information systems as well as their users. One of the more pervasive types of cyberattacks is phishing, which involves tricking individuals into providing sensitive information such as usernames, passwords, or financial details. The perpetrators usually accomplish this by posing as a trustworthy entity through emails, messages, or websites. There are also Man-in-the-Middle (MitM) attacks, which are designed to intercept and alter digital communications between parties without their knowledge.

People should also be aware of social engineering tactics. This is a more subtle method of targeting and exploiting potential victims online wherein cybercriminals manipulate individuals into divulging sensitive information or performing certain actions. Ransomware is another widely used variety of cyberattack that’s been deployed against individuals as well as private businesses and public entities. These attacks encrypt the target’s files or entire system, rendering their data inaccessible. Attackers then hold what they’ve seized hostage, withholding the decryption key unless a ransom payment is made. Another type of attack is credential stuffing. This practice exploits users with poor digital hygiene who may re-use their log-in credentials across different platforms and websites: with credential stuffing, cybercriminals use automated programs to run stolen username-and-password combinations against different accounts. Because the same log-in info often works on other websites as well, a single stolen credential set is a proverbial jackpot for cybercriminals.

Individuals and businesses need to be able to differentiate between these types of attacks and implement appropriate security measures that address the respective vulnerabilities associated with each one.

As a CPSO, how do you ensure the ongoing monitoring and detection of potential security threats posed by AI systems? What tools, technologies, or processes do you use to stay vigilant and respond promptly to emerging threats?

Zero-trust architecture is foundational for a CPSO. The zero-trust security framework refers to a model of designing products and supporting infrastructure that requires every user and system to prove their identity before gaining access. Furthermore, zero-trust models function on the assumption that systems will eventually be compromised and incorporate security measures such as encryption even on the “inside” of a product and its infrastructure. These are difficult mechanisms to retrofit into an existing security protocol, so it is critical that any new products have zero-trust measures built in from day one. This approach is very different from the old orthodoxy surrounding security: in the past, proverbial castle walls were built around a product and everything within its perimeter was trusted. Zero-trust security breaks with that orthodoxy by assuming there will be bad actors within the walls.
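The core zero-trust idea — no request is trusted just because it originates “inside” the perimeter — can be sketched with signed, expiring tokens that every call must present. This is a minimal illustration only; the names (`issue_token`, `read_record`) and the single shared secret are assumptions for the example, and a real system would use per-service keys or standard token formats:

```python
import hashlib
import hmac
import time

SECRET = b"demo-shared-secret"  # hypothetical; real systems use managed keys

def _sign(user: str, expires: int) -> str:
    msg = f"{user}:{expires}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def issue_token(user: str, ttl: int = 300) -> str:
    """Issue a short-lived token binding a user to an expiry time."""
    expires = int(time.time()) + ttl
    return f"{user}:{expires}:{_sign(user, expires)}"

def verify(token: str) -> str:
    """Every request, even an 'internal' one, must present a valid token."""
    user, expires, sig = token.rsplit(":", 2)
    if int(expires) < time.time():
        raise PermissionError("token expired")
    if not hmac.compare_digest(sig, _sign(user, int(expires))):
        raise PermissionError("bad signature")
    return user

def read_record(token: str, record_id: int) -> str:
    user = verify(token)  # no implicit trust inside the perimeter
    return f"{user} read record {record_id}"
```

The point of the sketch is the shape of the design: the data-access function never assumes the caller is legitimate, so a compromised internal component cannot read records without a valid, unexpired token.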

An alarmingly high percentage of breaches stem from “human error,” with some estimates placing it as high as 90%. A product does not operate in isolation — it is complemented by infrastructure, support services, administrative users, end-users, and internal users. Because there are so many parties and corresponding variables in play, products are very vulnerable to human error, which can result in major security compromises. As such, it is important to ensure not only that the product itself meets security and privacy standards, but that the entire support environment surrounding it — including the people using it — also meets those same standards. The only way to do this reliably is through certifications like SOC 2, ISO 27001, PCI, FedRAMP, and others. The CPSO and CISO must decide which standards best meet the business’s needs. The most important advice I can give regarding certifications is that it is absolutely possible to comply with standards and still have security gaps. Focusing solely on certifications does not guarantee security and privacy; while it helps, maintaining security is an ongoing endeavor that requires consistent due diligence. Cybercriminals don’t rest, and neither should the people trying to keep their digital assets safe.

With the increasing use of AI in various industries, how do CPSOs strike a balance between maintaining security and enabling innovation? What approaches or methodologies do you follow to ensure security without stifling technological advancements?

The principles discussed above are foundational to all digital products, including those that use AI. Engineering safety within the AI space ultimately means guaranteeing the safety of the users engaging with the product. Refinement in this area should be driven by your product’s use cases as well as the ethical considerations involved. The most dangerous use cases concern those where AI is used to make life-impacting decisions without incorporating any human input into the final decision. Not only do these types of use cases run the risk of negatively impacting people’s lives, but they can bring financial and legal exposure as well as negative media attention. To name an example, AI has been used to provide parole guidelines in sentencing proceedings. Upon scrutiny, it was discovered that the AI models and associated test data carried obvious biases that impacted certain demographics in a disproportionately negative manner. This is an instance where AI negatively affected people’s lives, consequently bringing risk and exposure to the developer. My advice for entities looking to deploy AI is to start conservatively: Carefully consider use cases, and if the product will be using AI to make life-impacting decisions, make sure there is a human with relevant expertise in the loop.

CPSOs face the challenge of striking a balance between maintaining security and enabling innovation, especially within the context of AI’s increasing prevalence across industries. The aforementioned ethical considerations must be kept in mind when weighing the most suitable methods for implementing AI — below are some of the best practices I’ve used for the better part of 10 years when navigating emerging technologies:

1. Maintaining a Human Element When Using AI

AI can be used to augment human capabilities rather than replace them. A human should always oversee and manage AI technology at every step of the process, from implementation and maintenance to decision-making, to ensure ethical standards of business are upheld.

2. Transparency

Human involvement ensures transparency in decision-making processes. This is crucial for building trust, especially when AI systems impact individuals’ lives or sensitive areas such as healthcare, finance, or criminal justice.

3. Non-Biased Data

By using non-biased data, AI systems can build trust, uphold ethical standards, and contribute positively to diverse and open-minded decision-making processes.

4. Balance Speed and Security

Balancing speed and security in AI development is crucial for harnessing the benefits of rapid innovation while safeguarding against potential risks. Striking the right equilibrium ensures that AI systems can adapt quickly to evolving needs without making compromises on robust security measures.

5. Human Decision Making

Humans should ultimately make decisions in AI decision-making processes to retain ethical oversight, exercise empathy, and consider the broader societal context in which the product is operating. Human judgment ensures accountability, adaptability to unforeseen circumstances, and the ability to address complex ethical considerations that AI systems may not have the capacity to fully comprehend.

Collaboration and information sharing among organizations are crucial in combating security threats from malicious AI. How do CPSOs foster collaboration within the industry, both in terms of sharing threat intelligence and developing common best practices to protect against evolving threats?

It’s imperative that CPSOs foster a culture of collaboration with one another within the industry. Because security threats are constantly evolving, it’s necessary for CPSOs to effectively compare notes and devise solutions that can be widely shared and incorporated into different practices. This begins with establishing platforms and open lines of communication that can track these threats, document notable incidents and their particulars, and scan for larger trends. Additionally, CPSOs should be working to establish agreed-upon frameworks and regulations that can be used by regulatory bodies to better rein in AI-fueled cybersecurity threats.

Can you share a real-world example where an organization effectively prevented or minimized a security threat from malicious AI? What measures did they take, and what lessons can other organizations learn from their experience?

AI-based attacks will continue to evolve, but the basic form that most people have to worry about is imposter attacks. These come in many varieties, including sophisticated phishing attacks using email or SMS phishing attacks conducted through text messages. However, the game changer has been deepfake attacks in which cybercriminals use AI to generate a voice that is nearly identical to a known person. Anecdotally, deepfake voice attacks have become sophisticated to the point where an otherwise savvy chief financial officer wired money to a fraudster because the “voice” they were speaking to was that of the CEO of the company they worked for. The actual CEO was on vacation, and the “voice” said the money was needed for an emergency. The cybercriminals behind the scheme had also monitored other communications to learn where the CEO was going on vacation in order to boost the credibility of their deepfake. Executives at businesses with access to considerable funds are under daily assault, and they must be aware of this developing dynamic.

Criminals are also using AI-generated voices to perpetrate schemes where victims are made to believe that their child has been kidnapped and will be harmed unless a ransom is paid. Here, the AI is used to recreate the voice of the child. This variety of fraud is frequently directed at grandparents, as they are generally more susceptible than younger, possibly more tech-aware parents.

There is no singular, miraculous technique or process that will stop these kinds of AI-aided imposter attacks. In this new age of tech-driven fraud, it’s critical that we all second-guess any digital interactions that call upon us to do things like wire money, divulge security information, or provide personal information. It has been documented that massive breaches at casinos in Las Vegas resulted from imposter-based calls to the casinos’ security centers which convinced personnel to share access credentials.

Other best practices include using a family-wide shared password to identify yourself, calling back the caller from another number, or even using technology that utilizes a selfie combined with a valid identification to validate a caller. The most important thing is to be cautious and make others aware of the risk posed by AI-based deepfake attacks.
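The shared-passphrase idea above can be strengthened so the passphrase itself is never spoken aloud where an eavesdropper (or a recording used to train a voice clone) could capture it: one party issues a random challenge, and the caller proves knowledge of the passphrase by returning a keyed hash. A minimal sketch of that challenge-response pattern follows (the function names are illustrative, not part of any specific product):

```python
import hashlib
import hmac
import secrets

def make_challenge() -> str:
    """Generate a fresh random challenge; never reuse one."""
    return secrets.token_hex(8)

def respond(passphrase: str, challenge: str) -> str:
    """Caller computes a keyed hash of the challenge with the shared
    family passphrase; the passphrase itself is never transmitted."""
    return hmac.new(passphrase.encode(), challenge.encode(),
                    hashlib.sha256).hexdigest()

def verify_caller(passphrase: str, challenge: str, response: str) -> bool:
    """Recipient checks the response with a constant-time comparison."""
    expected = respond(passphrase, challenge)
    return hmac.compare_digest(expected, response)
```

For a family verifying a phone call, even the low-tech version — a pre-agreed word plus calling the person back on a known number — captures the same principle: the verification secret must travel over a channel the attacker does not control.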

How important is collaboration between CPSOs and other stakeholders, such as software engineers, data scientists, and ethical hackers, in minimizing security threats from AI-powered products? How do you foster effective collaboration among these different roles?

These sorts of collaborations are paramount in minimizing security threats from AI-powered products. The synergy of expertise ensures a comprehensive approach to identifying and mitigating potential risks. Fostering effective collaboration involves creating interdisciplinary forums, encouraging open communication channels, and establishing shared goals that prioritize both security and ethical principles.

What are the “5 Things You Need to Create A Successful Career In Cybersecurity” and why?

1. A Passion for Seeking the Truth

Thriving in the cybersecurity space requires a passion for getting to the bottom of things and sharing the truth. This quality is vital, as it drives a relentless pursuit of accuracy and transparency in understanding and addressing security threats. The best cybersecurity experts are also very competitive and leverage this attribute to get the best possible results, problem solve, and flex their expertise.

2. Hands-on Experience and Willingness to Do the Work

Practical engagement in cybersecurity not only strengthens one’s technical skills but also instills a deep understanding of security issues and fosters strong collaboration with others in the process. Security professionals usually work long hours and must be prepared to do so in emergency situations, as they are the last line of defense between the business’s ability to function and cybercriminals. This means they must be able to handle pressure, communicate, work with a team, and do whatever it takes to protect the business and its employees.

3. Continuous Learning and Adaptability

As the threat landscape rapidly evolves, staying abreast of the latest attack vectors and defense strategies is a requirement for a successful career in cybersecurity. Because ill-intentioned digital actors are always innovating new ways to defraud victims, cybersecurity experts need to keep pace with the latest developments, analyze possible vulnerabilities, and develop solutions.

4. Networking and Community Engagement

Cybercriminals are a coordinated bunch, and the response to them needs to be as well. Networking is the key to a successful career in any industry, and the same is true of the cybersecurity field. Cooperating and reaching out to others to share solutions forges new possibilities both in defending against evolving threats and in one’s personal career.

5. Ethical Mindset and Integrity

Given the sensitivity and importance of the information being handled, an ethical approach is paramount. Cybersecurity experts are given an enormous amount of trust, and dereliction of their duties can result in a dramatic degree of harm. If you want to be successful in this field, integrity is a must.

You are a person of enormous influence. If you could inspire a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 🙂

I’m committed to driving a movement of embracing and valuing humanness as we embrace technology now and into the future. I envision harnessing the power of technology to improve human lives, prioritizing our well-being, safety, and dignity rather than monetizing our most precious moments. New and advanced technologies such as AI should be working on behalf of humans, not the other way around. I am an advocate for responsible and inclusive technological practices that prioritize the welfare of individuals and communities at large, and I will continue working on behalf of that cause.

This was very inspiring and informative. Thank you so much for the time you spent with this interview!
