21 Oct 2025

Unlocking MCP for aviation: 10 Questions Every Curious Mind Should Ask

At Leon Software, we’re always exploring new ways to bring innovation and security together. In this interview with Paweł Olbrycht, we dive into the Model Context Protocol (MCP)—a new open-source standard that makes it easier to connect AI tools with Leon while keeping everything safe and controlled. Paweł explains how MCP works, why it matters for our users, and how it helps us test and integrate AI in a secure, efficient way.


Section 1: What MCP Can Do for You (and Why It’s Cool)

  1. What exactly is MCP, and why is it important at Leon? (Let’s start simple—what does MCP stand for, and what does it actually do?)

MCP is, as the official website puts it, “an open-source standard for connecting AI applications to external systems” (https://modelcontextprotocol.io/docs/getting-started/intro). It is a protocol that standardizes how AI can be connected to various services, external APIs, or other tools, allowing them to be used, for example, through an AI chat interface.
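Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages; the exact wire format is defined by the MCP specification. As a rough sketch, a client asking a server to run a tool sends a request shaped roughly like the one below (the tool name `search_flights` and its arguments are hypothetical examples, not part of Leon’s actual API):

```python
import json

# Sketch of an MCP "tools/call" request as a JSON-RPC 2.0 message.
# The method name and params structure follow the MCP specification;
# "search_flights" and its arguments are made-up illustrations.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_flights",
        "arguments": {"departure": "WAW", "date": "2025-10-21"},
    },
}

# Serialize the request to send it over the wire.
wire_message = json.dumps(request)
print(wire_message)
```

The server replies with a matching JSON-RPC response containing the tool’s result, which the AI application can then feed back into the conversation.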

  2. How can MCP help bring AI agents into Leon’s system? (Can you give an example of how an AI agent might be used with MCP?)

Think of the MCP server as something that knows how to communicate with Leon's API to achieve specific results, e.g., searching for flights in Leon. Because the MCP server already knows how to do this, we don't have to build a complex integration between our AI agent and Leon's API. All we have to do is connect the AI agent to the MCP server.
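A minimal sketch of that idea: the MCP server owns the knowledge of how to call the backing API and exposes it to the agent as a named tool. Everything below is hypothetical (the tool name, the fake API data); it only illustrates the separation of concerns:

```python
# Hypothetical sketch: the MCP server maps a tool name to a handler
# that knows how to query the backing API (faked here with a dict).
FAKE_FLIGHT_API = {
    ("WAW", "2025-10-21"): [{"flight_no": "LEO123", "departure": "WAW"}],
}

def search_flights(departure: str, date: str) -> list:
    """Tool handler: translates the agent's request into an API lookup."""
    return FAKE_FLIGHT_API.get((departure, date), [])

# The server keeps a registry of tools; the agent only needs tool names,
# never the details of the underlying API.
TOOLS = {"search_flights": search_flights}

def call_tool(name: str, arguments: dict):
    return TOOLS[name](**arguments)

result = call_tool("search_flights", {"departure": "WAW", "date": "2025-10-21"})
print(result)
```

The agent’s side of the integration stays the same no matter how the handler talks to the API, which is exactly why connecting a new agent is cheap.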

  3. Does MCP make it easier and faster to try out new AI tools? (If we wanted to experiment—would it speed things up?)

Yes, we can compare it to a USB port on a computer. If the computer has such a port and the device has a plug, we simply connect them. The same applies to the MCP server—if our AI tools can communicate with such a server, we can easily and quickly connect them to our MCP server.

  4. How does MCP help keep different parts of the system separate but working together? (Does this mean AI stuff doesn’t break the main system?)

Through the MCP server, an AI agent can only access what the API key allows. Even so, it is worth ensuring that certain actions require manual approval by the user, so the AI does not accidentally break something. Of course, it could only do damage within the scope of its permissions; for example, if it has access to the phonebook, it could add some contacts. But we definitely don't want the AI to perform potentially unwanted actions without our approval, right? Such approval is already integrated into our AI chat tool within Leon.
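The permission boundary described above can be sketched as a scope check: each token carries a set of scopes, and the server refuses any tool call outside them. The token values, scope names, and tools here are all hypothetical:

```python
# Hypothetical sketch: each API key/token carries a set of scopes,
# and the MCP server refuses tool calls outside those scopes.
TOKEN_SCOPES = {"token-abc": {"flights:read", "phonebook:write"}}

TOOL_REQUIRED_SCOPE = {
    "search_flights": "flights:read",
    "add_contact": "phonebook:write",
    "delete_flight": "flights:write",  # deliberately NOT granted above
}

def is_allowed(token: str, tool: str) -> bool:
    """An agent can only reach what its token's scopes permit."""
    return TOOL_REQUIRED_SCOPE[tool] in TOKEN_SCOPES.get(token, set())

print(is_allowed("token-abc", "search_flights"))  # True
print(is_allowed("token-abc", "delete_flight"))   # False
```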

🛡️ Section 2: How MCP Keeps Things Safe and Secure

  5. How does MCP make sure AI agents don’t go too far or access things they shouldn’t? (Is there a kind of “security fence” around them?)

The question may stem from concerns about whether AI will, for example, delete a flight when I ask for information related to it because it misunderstands our intentions. This concern is understandable because LLM models are non-deterministic and, as can be seen when using various “AI chats,” they do not always understand us as well as we would like. To prevent unwanted actions from being performed, it is important that our AI tool allows manual approval of specific MCP tools that the AI wants to use. This gives us control over what will be done. This is especially important when modifying data.
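The manual-approval gate described above can be sketched as follows: read-only tools run immediately, while mutating tools are held until a human explicitly approves them. The tool names and the callback shape are hypothetical illustrations:

```python
# Hypothetical sketch: mutating tools require a human decision before
# they execute; read-only tools pass through untouched.
MUTATING_TOOLS = {"add_contact", "delete_flight"}

def run_tool(name: str, arguments: dict, approve) -> str:
    """`approve` is a callback asking the user to confirm a mutation."""
    if name in MUTATING_TOOLS and not approve(name, arguments):
        return "rejected: awaiting user approval"
    return f"executed {name}"

# A user who declines every mutation:
def decline_all(name, args):
    return False

print(run_tool("search_flights", {}, decline_all))       # executed
print(run_tool("delete_flight", {"id": 7}, decline_all)) # rejected
```

Even if the model misreads our intent, the worst it can do without approval is a read, which keeps non-deterministic behaviour inside a safe boundary.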

  6. Are all MCP activities logged so we can review what happened later? (Like a security camera for API calls?)

Yes. Every tool call is logged, along with its parameters.
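Such an audit trail can be sketched as a thin wrapper around tool dispatch: every call is appended to a log with its name, parameters, and a timestamp before the tool runs. This is an illustrative pattern, not Leon’s actual logging code:

```python
import datetime

# Hypothetical sketch: wrap tool dispatch so every call is recorded
# in an append-only audit log before execution.
AUDIT_LOG = []

def logged_call(tool, name: str, arguments: dict):
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": name,
        "arguments": arguments,
    })
    return tool(**arguments)

result = logged_call(lambda q: f"results for {q}", "search_flights", {"q": "WAW"})
print(AUDIT_LOG[0]["tool"], AUDIT_LOG[0]["arguments"])
```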

  7. How do AI agents prove who they are when they connect through MCP? (Do they use tokens, passwords, or something else?)

That's a very good question. It is important that no sensitive authentication data is passed to the LLM model, as this would risk it being leaked through an attack on the LLM. It is also important, as with any other functionality of any system, that users cannot access data to which they are not authorized.

Therefore, an access token is used for authentication, and this token is not passed to the LLM model.
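The key property here is that the token lives only in the transport layer, never in the text the model sees. A sketch of that separation (the token value and function names are hypothetical):

```python
# Hypothetical sketch: the access token stays in the MCP client's
# HTTP layer; the text sent to the LLM never contains it.
ACCESS_TOKEN = "secret-token"  # held by the client, not the model

def build_api_headers() -> dict:
    # The token is attached to the outgoing API request...
    return {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def build_llm_context(tool_result: str) -> str:
    # ...while only the tool's *result* is placed in the model's context.
    return f"Tool output: {tool_result}"

context = build_llm_context("2 flights found")
assert ACCESS_TOKEN not in context  # the secret never reaches the LLM
print(context)
```

Because the model never holds the credential, a prompt-injection attack against the LLM cannot exfiltrate it.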

  8. Can we set different limits for different agents, like how often they can run or when they expire? (Basically, can we control their "power levels" individually?)

As I mentioned earlier, the MCP server does not give everyone access to everything. Access is based on permissions associated with a given refresh/access token. If an API key is no longer in use, it can be deleted. This ensures that no one will be able to use it.
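Revocation can be sketched as removing the key from the set of active tokens, after which authentication fails immediately. A toy illustration with made-up token values:

```python
# Hypothetical sketch: deleting an API key immediately invalidates it.
ACTIVE_TOKENS = {"token-abc", "token-def"}

def revoke(token: str) -> None:
    ACTIVE_TOKENS.discard(token)

def authenticate(token: str) -> bool:
    return token in ACTIVE_TOKENS

revoke("token-abc")
print(authenticate("token-abc"))  # False: the deleted key no longer works
print(authenticate("token-def"))  # True: other agents are unaffected
```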

Leon users, other than administrators, cannot integrate AI tools with the MCP server, but they can utilize the AI chat tool integrated within Leon.

  9. How does MCP support Leon’s bigger business goals, and how does it impact cybersecurity and compliance? (Where does it fit into the company’s overall security game plan?)

The MCP server allows you to integrate AI agents with Leon, which makes it possible, for example, to look up information or make changes using natural language, e.g., “hey, add to contacts: John Doe, born January 1, 1970, email …”. This is something relatively new and really cool, but new things mean new risks that need to be kept in mind and carefully considered to ensure safety.

There is a lot to say here, but one of the important things is to make sure that AI is not allowed to do whatever it wants without approval. An AI agent can make mistakes or, in certain situations, be manipulated by someone into performing an action that we do not want. Therefore, we should ensure that certain actions, such as modifying or deleting data, require approval. That's why such an approval mechanism is built into the AI chat tool within Leon.


Not yet a member of Leon community? Contact our Sales team to find out more or jump straight into the 30-day free trial.
