Video: AI Agents in Action: Meet Your New Financial Crime-Fighting Sidekick | Duration: 53:32 | Chapters: Introduction to AI Agents (0:05), AML Investigator Assistant (5:08), Agent Architecture Explained (7:40), Demonstrating AML Agent (10:52), Database Tools Discussion (42:35), Platform and Compliance (43:49), Planner Agent Architecture (45:50), Dataiku Implementation Flexibility (47:14), Licensing and Pricing (49:09), LLM Platform Support (50:06), Conclusion and Governance (51:37)
Transcript for "AI Agents in Action: Meet Your New Financial Crime-Fighting Sidekick":
Hello, everyone. Welcome to our webinar discussion today on AI agents, specifically on leveraging AI agents across the entire AML process, with a focus on assisting AML investigations from start to finish. What we'll be covering here today is, first, an initial setting of the framework and architecture: a level set on what agents are and how they will be assisting us, and then the specific framework and architecture that Dataiku has suggested here. Then we will move quickly into a live demonstration led by David, our data science business solutions senior manager. That will be the heart of this meeting, in which you'll get a chance to see the framework and architecture fully functioning inside of Dataiku. You're going to see two fundamental elements as a result. One, exactly what the proposed project will achieve inside of Dataiku and how it would very materially assist in the AML investigation. And then, after seeing that front end component, you'll see the back end and how Dataiku makes it very easy and extremely safe, governed, and secure to build out these kinds of systems, and also how flexible they are. What you're going to see here was built entirely in Dataiku, which means it could be entirely customized in Dataiku as well. Agents as a whole will indeed be game changers within the FSI space, and we can talk about a couple of levels along which that is relevant. When you think about an agent, you are moving well beyond the general idea of either basic automation or a chatbot interface with a large language model. Chatbots can be part of an overall agent framework as a way for the user to interact, but they are not the heart of it. In fact, the most fundamental part of an agent architecture is the use of reasoning, in which the agent is making decisions in order to assist the human it is collaborating with. If we think about this overall, we move away from very rigid, multi-step systems. Those may still be part of an overall framework, but they are not at the heart of the differentiating factor that the LLM agent brings in. Instead, you can move into a reasoning mode in which decisions can be made by the agent to move the activity forward. The other element that is very impactful and can bring a huge amount of benefit is that the agent is capable of performing multidimensional analysis. This could be analysis across many different fragmented datasets that are all structured. It can also be work across unstructured datasets that might be either inaccessible or extremely time consuming to engage with in their current form. And, as you will see later, it allows for interactions with datasets that go beyond a simple table or a text document and into areas of significant value like graph analytics. In all of these cases, this is about a collaborative experience with the human investigator. The human investigator remains in charge of the entire investigation and process, and they are being assisted at each stage by various agents working together with them. In terms of the benefit that agents can bring, beyond the capabilities we just discussed, the benefits are rather straightforward and can be immediately evaluated simply by watching them in action, as you'll see shortly. You will obviously have significant efficiency.
The agent is capable of tirelessly poring over a very large amount of information, be that a graph, databases, or text documents, in order to extract potentially relevant information and then summarize it. As a result, you're able to move across a vastly larger sweep of data, and many more types of data, than would have been plausible before. That approach can be extremely consistent in its nature, so you are able to ensure that, once you are comfortable with the agent arrangement, the actual manner in which it performs those behaviors is extremely robust and well understood. All of this, of course, can improve general satisfaction for your investigators, but also significantly reduce fatigue in those human operators. You will be reducing things like alert fatigue and allowing them to focus more on meaningful and relevant information, using the best of their abilities. Within financial services broadly, there are, of course, many different potential use cases. Here's a very small selection that we could speak about perhaps later on. Any questions, comments, or thoughts as we move through, feel free to leave them in the chat or in the dedicated question and answer section. All of those will be tackled after the demonstration, near the end of the conversation. In this particular discussion, though, we're going to move into a conversation about this AML investigator that will assist a human operator in performing their investigations. At a high level, this investigator performs exactly the kinds of tasks that you would want. It is enriching a lot of the analysis that would have already been performed. It is taking some of the most mundane tasks, tasks which are very difficult to scale, and allowing them to be performed at very large scale, extremely efficiently and quickly. It is also using its decision-making capabilities to help rank the information: rather than providing simply one massive amount of analysis covering every possible avenue, it instead sensibly looks through those avenues, evaluates them, and then decides which ones should be prioritized and thus brought to the attention of the investigator. That surfacing, that decision making in terms of what is most important and which avenues to pursue, is extremely beneficial, far more so than simply cranking through a very wide array of possible pieces of information. It will also generate supporting material that can be useful. If, for example, it is plausible that a SAR escalation might be necessary, a packet could be produced providing a lot of the basic information that would be necessary for that SAR to be completed, which the human operator could then work with. So in all of these cases, the human operator is being assisted. But just as importantly, the human operator remains in charge. They are essentially managing that process. They therefore have, at every opportunity, the ability to dive deeper or to pursue areas that the agent did not pursue. They can do this either via the chat interface or by going into the dedicated components that are essentially the human equivalent of what the agent engaged with, like graph or data analysis. Let's think then about the kinds of areas this would be addressing specifically, not across the entire industry, but for this very particular challenge.
You will, of course, be reducing the load on investigators, allowing information that might be in very disparate systems and different forms to be very rapidly brought together and accessed. You will be able to apply a very consistent process and, therefore, respond more effectively to the pressure that you receive. All of this together allows the human operator to perform at an even higher level than they already were, with greater confidence and efficiency as well as greater focus. If we then look at what the actual architecture of all of this is, I think this is one of the most important components. As you think about an agent in operation, what you essentially need to understand are the ideas of interfaces, the ways in which the agent engages with the human operator. You need to think about each individual agent as perhaps performing in an ensemble, in which an overseeing agent is leveraging the work of other sub-agents, and those agents in turn are leveraging tools. The tools are able to perform specific things that need to be achieved. This breakdown, this kind of simplifying framework, is exactly how Dataiku allows agents to be built. So after the demo of the actual human experience, you'll be able to see all of the material under the hood, in which these components, the tools, the agents, and then an ensemble-architecting agent on top, all behave together in a very well governed, well managed way. As you can see here, these are also modular. So you can have an agent which has a general data search capability and is looking across many different systems that could themselves be disconnected from one another, but it is bringing them together. You have another agent that is dedicated solely to graph analysis and is therefore extremely efficient at that one particular task. And then yet another agent that is looking for politically exposed persons and otherwise investigating the human component of the potential risk profile. All of this is then brought together into a summary report that is very specifically graded and decided upon by the overarching AML assistant agent. The agent will decide what components should make it into that report and in what order they should appear, and all of those components will explain themselves and can then be dived into more deeply, as I said, either in a chat interface with the agent or by having the human go directly into those systems, as you'll see. All of this allows the investigator to participate in the process. They are not being handed a decision. They are not being told what to do, nor are they just given some massive dump of thousands and thousands of data points and possibly vaguely useful information. They're given a refined packet of usable material that they can then interact with and move forward from. All of this is also supported by a separate agent, which prepares a potential SAR packet that could then be leveraged by the investigator. Again, this is simply allowing them to move quickly, while still letting them make the most fundamental decisions about how to action the alert.
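To make that ensemble concrete before we move to the demo, here is a minimal, illustrative sketch of the pattern in Python: an orchestrating agent fans work out to sub-agents and then ranks their findings, keeping only the strongest leads for the report. Every name, score, and message here is a hypothetical stand-in, not a Dataiku API.

```python
# A minimal, illustrative sketch of the ensemble described above: an
# orchestrating agent fans work out to sub-agents, then ranks the findings
# and keeps only the strongest leads for the report. All names, scores,
# and messages are hypothetical stand-ins, not Dataiku APIs.
from dataclasses import dataclass

@dataclass
class Finding:
    source: str       # which sub-agent produced the lead
    relevance: float  # priority the orchestrator assigns when reasoning
    summary: str      # short, investigator-readable explanation

def data_search_agent(alert_id: str) -> Finding:
    return Finding("data_search", 0.6, f"Sparse KYC history behind {alert_id}.")

def graph_agent(alert_id: str) -> Finding:
    return Finding("graph", 0.9, "Dense web of cross-border transfers found.")

def pep_agent(alert_id: str) -> Finding:
    return Finding("pep", 0.3, "No politically exposed persons identified.")

def aml_assistant(alert_id: str, top_k: int = 2) -> list[Finding]:
    """Orchestrator: run every sub-agent, then rank and trim the findings."""
    findings = [agent(alert_id) for agent in
                (data_search_agent, graph_agent, pep_agent)]
    # The real assistant uses LLM reasoning to prioritize; a score stands in.
    return sorted(findings, key=lambda f: f.relevance, reverse=True)[:top_k]

for finding in aml_assistant("ALERT-0001"):
    print(f"[{finding.source}] {finding.summary}")
```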
We'll move now into a demonstration led by David, in which you will be able to see all of these components operating. You will see both what it would be like to be a human operator engaging with this workflow and how you would be supported, and you will then be able to see how, underneath, the components are well managed, properly architected, governed, and secure, so that you can understand how this could be built out, modified, or expanded into many different areas. And with that, I'll hand this over to David. Thank you, John. So let me share my screen and show you the demonstration of what we just talked about, and see this agent in action in the multiple ways it can be leveraged. First, as we said earlier, this agent, in a flow in Dataiku, can be leveraged as a batch process. Imagine a scenario where some qualified alerts come into our system. Each day, or at the frequency of our choice, we trigger the alert reporting system, which takes these individual alerts as input, starts investigating on its own, and produces a report that is available to the investigator as a starting point for the investigation. So, using all the tools available and described previously, the agent has looked through some potential leads and some data and generated a first investigation report, of which we have a summary here, in an email that is automatically sent to the investigator at the end of the generation of the report. Here we have, just for these four alerts, a very short and understandable summary, in two sentences, of what has happened for each specific alert. And if we want to go into more detail, we can log in to our Dataiku dashboard and go through the report itself in more depth. On this first page, I am seeing the first alert, with, on the left, the initial report that has been generated by our agent (we'll go into some more detail about how it's built), and on the right, my other capability, my other way of calling my agent, which we'll also see later: an interactive chatbot that still leverages the same architecture we described earlier, but in a real-time, interactive way. To start with, the report on the left has some structure that has been user defined. We have some overall information about the alert itself: as you can see, the rule that was triggered to generate it, the transaction ID, some account numbers for the origination and target accounts, and the short summary. But when we go below, the agent has actually used its reasoning ability, and the tools accessible to it, to generate a report that is really targeted to the leads it has found during its initial investigation. So we can see here that we have a table of contents that is not generic but really depends on the information that was found by the agent for this specific alert. Now if we go to the second one, and this one will go into more detail about the content of the report, we can see that previously we had three subparts in addition to the initial summary, and in this one we only have two, those identified by the agent as interesting leads to be investigated. Now if we go a bit below, we can see what has been found and raised by the agent while doing the investigation.
As we saw in the architecture slide, the agent was able to find some overall information about the customers at the origin and at the target of the transaction that was surfaced as an alert. This information is summarized here, with some additional reasoning based on whether or not it should be flagged in some particular way. So as we can see, the first layer of investigation has found an abnormal number of transactions, a very low number of transactions for this company in particular, which might be suspicious. We also get some recommendations for further investigation, and we can see that, in a way, it is possible to iterate using the agent and build some complex architectures that still use these modular components but let them dialogue with one another to build a more complex investigation. If we go a bit below, we have some information about the two counterparts of this transaction: the account numbers they have, the balances, the currencies. Here we also have a key shareholder that has been identified who might be of interest. Some other risk factors can be found. And since we are leveraging a graph exploration tool, we also have in the report the graph itself, really added to the report. What happens is that when the agent is investigating by using the graph tool, it generates some Cypher queries, queries specific to the graph database that contains the information about transactions, holdings, et cetera. And it surfaces this information back to the orchestrating agent, which can then put it into the final report, so that as an investigator I can really start from what I see here, get some direct visual insight, and trust what the agent has been generating. So, for example, here I have this Steven Monroe individual who has been found as a key shareholder of my target account's company. And what I can do, if I want to investigate a bit deeper, is, on the right-hand side, run some additional investigation on that particular individual. This is what I had done here. I want to just add some additional information to what my report has given me in a more static way. So I asked for some information about the risk of the individual, Steven Monroe, and I see that my agent is looking through all the data and tools it has at its disposal to provide me with a summary of information about him. So I have information about his personal profile, his business connections, whether he is eventually a politically exposed person, whether some news was found that is particularly interesting for that individual, financial activity, et cetera. And based on that, and based on the prompts that we provided to the agent, it can summarize these findings and produce a risk assessment. So as an investigator, I get to understand the data that was used to generate this assessment, plus the conclusion made by the agent. I can also have access to the Cypher queries, like previously, if I want to dig a bit deeper, and even to the news that was found during the web search.
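For readers curious what those generated Cypher queries look like in practice, here is a hedged sketch assuming Kuzu's Python API (the graph database David names later in the Q&A); the node labels, relationship types, and properties are hypothetical, not the demo's actual schema.

```python
# A hedged sketch of the graph tool at work, assuming Kuzu's Python API
# (the graph database named later in the Q&A). The schema (Person, Company,
# and Account nodes with SHAREHOLDER_OF and OWNS edges) is hypothetical.
import kuzu

db = kuzu.Database("./aml_graph")  # path to an existing Kuzu database
conn = kuzu.Connection(db)

# The kind of Cypher the agent generates: who holds shares in the company
# that owns the flagged target account?
query = """
MATCH (p:Person)-[:SHAREHOLDER_OF]->(c:Company)-[:OWNS]->(a:Account)
WHERE a.id = $acct
RETURN p.name, p.risk_rating
"""
result = conn.execute(query, parameters={"acct": "ACC-0042"})
while result.has_next():
    name, risk = result.get_next()
    print(f"Key shareholder: {name} (risk rating: {risk})")
```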
Now, if I want to go even deeper and explore the graph further than what has been done so far, I can also take this query back into our graph exploration tool and see for myself whether I can find some other patterns starting from the query that was generated. So, for example, looking at my original company, I can, with a click, expand to see its neighbors, which are the bank accounts. This was the bank account which had a suspicious activity; I can see it here. The transaction was to this bank account, but I can look at the other bank accounts that received a transaction from this company. And, going through there, I can see the complexity I can get into in terms of transactions. This is where the agent can be really interesting for analyzing these webs of transactions and eventually getting some analysis of the distribution of the amounts, the frequency, et cetera. It is also one of the values of having agents run these kinds of analyses because, as you can see, it is a very interactive process where it is hard to know in advance in which direction you want to go. You need to proceed interactively and, at each step, make decisions, use some reasoning, to see in which other direction you want to go. So, for example, I can think about these transactions that were made and where they lead. And when you get, like I said, to another node that might be interesting, you can always come back to your conversational interface and, like I did before when analyzing an individual, analyze a company, like I did here, and get some information similar to what I had before. In particular, looking at the transactions and the bank account structure, I can see here, for example, for this company, which has five bank accounts, that it could be hard to identify potential transaction patterns, and I can see how tools given to such an agent can be very helpful for getting a first view of potential red flags in terms of the distribution of the transactions, something that can be hard to pinpoint and see from a graph perspective. And still, as previously, from the data that has been retrieved through the agent, I get an assessment, and I get my sources, which I can then go through and use to do my manual investigation on my side. Now, going to my third alert, I can see again that my structure is also different here. In particular, the agent has identified some risks related to cross-border, cross-currency transaction patterns. And we can see that, really depending on the type of alert and the type of insight generated, it can go in two very different directions. So, for example, here it really focuses on the cross-currency transactions and the fact that there were some high-risk classifications on some accounts found in their connections. Still, as previously, we have access to the Cypher queries and are able to really source the information that was generated in this report. And if, at some point in our analysis, we feel that we have reached a state where we want to prepare for a SAR, a suspicious activity report, we also have the ability to generate a preparation report that will take this information, plus all the necessary information needed to fill in such a report, and send it by email to the relevant stakeholder, who will be able afterwards to review this information and eventually file it.
So you can really see how both of these components leverage an agent whose purpose is to accelerate, to be an assistant, and to give more power and more tools to the investigator. So, for example, here, for the SAR that was prepared: from this command, it generates a SAR preparation document, which I can see afterwards here, containing information about the alert itself, some IDs and some amounts, which can then be used for the actual SAR filing. Now what I'm going to do is show you a bit of what is going on under the hood to generate such a report and build the overall agent architecture. This is part of the flow that we have below, the flow that contains the data, the logic, and the agent architecture to support this decision-making process. In this box here, I have a view of the five agents that I have in my project: this one, which is the overarching orchestrating agent, and four individual ones, each specific to one subtask of this report generation and AML investigation assistance. If I go inside this agent, I can see here the screen, which is a really simple way of creating my own agent. These are called visual agents, meaning that you don't need to write any line of code to create them and to really leverage some complex tools and put them into a process. In this screen, I have chosen my LLM, I have given it a prompt, and then I can select some tools that will be accessible and understandable, through their descriptions, to my AML investigation assistant agent, to be used when deemed necessary by my orchestrating agent. These are all visual tools. I can add a tool just like that, adding one from the library of tools that has been created in this project as well. So you can see how simple it is to add a new tool if necessary. And this is also the power of the agent architecture, in that it is very modular: here, for example, these four tools are actual agents that I have connected back to my orchestrating agent. So I can update these individual agents without changing my orchestrating agent, I can improve them individually, and I can also use them in other projects if I need to. If I go to my tools screen, I can have a look at, improve, and configure all my individual tools. If I have a look, for example, at my KYC search tool, this is a really simple dataset lookup tool (I'll show a code-level sketch of this lookup just below). I have my golden data source that has been shared in this project, and I can look through it with an ID and get the information. I also have some knowledge bank search. So I have unstructured documents, in PDF for example; I can, with visual tools, put this information into a knowledge bank inside Dataiku and then access these knowledge banks through a tool that I connect to an agent. And I can also configure an emailing tool that really makes this whole process completely automated. So if I want my agent to send some email, or some Slack or Teams messages, that is also easily possible. Now, going back to the flow, I can see that this agent is being used in some recipes, some data transformations inside the flow, to transform some data and augment it. The data that I have as an input is very basic: it is really the input alerts that were generated by my system.
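As a rough code-level equivalent of that KYC dataset-lookup tool, here is a short sketch assuming Dataiku's Python API inside a code recipe; the dataset name and the customer_id column are placeholders, not the demo project's real names.

```python
# A rough code-level equivalent of the KYC dataset-lookup tool shown above,
# assuming Dataiku's Python API inside a code recipe; the dataset name and
# the customer_id column are placeholders.
import dataiku

def kyc_lookup(customer_id: str) -> dict:
    """Return one customer's KYC record from the shared golden source."""
    df = dataiku.Dataset("kyc_golden_source").get_dataframe()
    match = df[df["customer_id"] == customer_id]
    return {} if match.empty else match.iloc[0].to_dict()

print(kyc_lookup("CUST-0042"))
```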
From that input, I have the ability to call my agent in a static way and have it called through a specific additional prompt, specific to the data that I am sending to it and to the task that I want it to perform. And based on that information, that prompt and the alert ID, it will use the tools as necessary to generate my initial leads. I then do a few steps of engineering to search in some more directions and generate the initial reports. The following steps are there to reformat the report a bit, add some titles, et cetera, always using a combination of visual tools, of LLMs in a standard way, and sometimes some code as well to integrate some of the images. And what is also interesting is that not only can I leverage my agent inside the flow like I did here, I can also automate the call of this agent through a scenario (a code sketch of this batch call follows below). This is the email that I showed you at the beginning. Basically, as my input dataset changes, or, for example, at a set frequency, I can have this scenario run. What it will do is call the agent in the flow, like what we have seen before, build the reports, and, when it's done, send an email with the summary that we saw before. But not only can I use it inside the flow like this, but also through the interface that we have seen here on the right, in an object called Agent Connect, which can be used to have conversations with this agent. And one of the important parts of Agent Connect, and of all the calls to any agent, is that we want to keep track of how the agent was called and what kind of responses it provided, and to evaluate these responses and iteratively improve their quality through additional work. This is what we can do with our other tool called the traces explorer, where, as you can see here, we can look through all the questions that we asked our agent and see exactly what happened. As you can see, it can be pretty complex, and in particular, when you have an agent architecture leveraging multiple layers of agents and tools, you still want to understand the sequentiality of what was called and why, and this is what we can provide here. So we can see in which order, and with which prompt, each of these agents was called. For example, here we had a thinking phase before going into the graph exploration tool, which is one of the sub-agents of the orchestrating agent; then thinking again about what to do next, for example saying that it would check the PEP (politically exposed person) status; then calling the PEP agent, et cetera. We can really have a sequential view of what is going on. So this is a really important tool, this visual traces explorer, to understand what is going on, to have an overall view of what the agent is doing, and not let it be a black box. We understand what is going on. I also have the cost of my prompts, to see, if you put it at scale, what the expected cost would be, and eventually whether there are some trade-offs in switching from a more expensive LLM to a cheaper one. And once you have a lot of processes going on, you can try to optimize the time spent, and this can be done by seeing here which steps, calls, and tools might take a lot of time to be performed.
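Here is a sketch of that batch, in-flow usage, assuming Dataiku's LLM-mesh Python API and that the agent is exposed through it; the agent ID, dataset names, and columns are placeholders, not the exact project shown in the demo.

```python
# A sketch of the batch, in-flow usage described above, assuming Dataiku's
# LLM-mesh Python API and that the agent is exposed through it. The agent
# id, dataset names, and columns are placeholders, not the demo project's.
import dataiku

project = dataiku.api_client().get_default_project()
agent = project.get_llm("agent:aml-investigation-assistant")  # hypothetical id

alerts = dataiku.Dataset("qualified_alerts").get_dataframe()
reports = []
for _, alert in alerts.iterrows():
    completion = agent.new_completion()
    completion.with_message(
        f"Investigate alert {alert['alert_id']} triggered by rule "
        f"{alert['rule_name']} and draft an initial investigation report."
    )
    reports.append(completion.execute().text)

alerts["initial_report"] = reports
dataiku.Dataset("alert_reports").write_with_schema(alerts)
```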
Basically, all of this is there to have a really governed way to use LLMs and agents in the platform, because, as these agents perform more and more tasks going forward, you really want to make sure that they perform them in the way that you have designed them to, and in an efficient and cost-effective way. And with this, I think I'm going to unshare my screen, and we can move on to the Q&A session. Excellent. Thank you, David. I'll bring up a little slide here with some contact information, and we'll begin to round out the conversation. I really appreciate folks asking so many excellent questions on the left, many of which I think we've gotten to. There's one that I was in the midst of answering before closing the app, and we can pick up from there. I would say that if you have questions, now would be the time to get them into the chat on the left-hand side. We'll try to tackle a couple there. But as you will see from the screen I'm about to share with you, you can also, of course, always reach out to us, and we would very much encourage you to do so. Let's get this all nicely set up. There you are. So you can reach out to myself directly, you can obviously always reach out on our website, dataiku.com, and you can engage with us as you see fit. I think there might be a little bit of background noise as well; David is moving into another room to free us from it. So we have a couple of questions on the left-hand side, and I'll try to get to a couple of them quickly. In terms of the source systems and the transactions and so on, these are going to be many and varied. This particular example used a variety of realistic synthetic data sources as well as some publicly available sources. In your own firm, you would imagine swapping those out for capabilities that are particular to you. You may have, for example, your own internal information that would be incredibly valuable and also securely contained, which Dataiku would be able to access without any additional risk because we run internally within your own walls. On top of that, you might have access to various generic public web searches, or dedicated databases that you might pay for access to or access generally. And, of course, you might be bringing on new pieces of information as you move forward. In terms of whether or not the process is entirely LLM based, this is a great question. As you saw, LLMs are critical components of each of the agentic behaviors being performed. You absolutely can directly integrate RAG or vector database capabilities, and those RAG capabilities are also very easy to spin up in Dataiku. As you saw from David, you can build out many things in Dataiku entirely using visual capabilities and then place additional capabilities on top. That would include setting up, for example, a retrieval-augmented generation workflow, in which various unstructured information is processed into a knowledge bank, which the agent, or an LLM more generally, can then access and work with. We don't particularly show that in this example, but you can absolutely imagine it being another component of this capability.
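To illustrate the retrieval-augmented generation pattern just described, here is a deliberately tiny, self-contained sketch, independent of Dataiku's knowledge banks: document chunks are embedded, the question is matched by cosine similarity, and the best chunks become the LLM's context. The bag-of-words embedding is a toy stand-in for a real embedding model.

```python
# A tiny, self-contained sketch of the RAG pattern just described,
# independent of Dataiku's knowledge banks: chunks are embedded, the
# question is matched by cosine similarity, and the best chunks become
# the LLM's context. embed() is a toy stand-in for an embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())  # bag-of-words stand-in

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

chunks = [
    "Policy 4.2: transactions above 10,000 EUR require enhanced due diligence.",
    "Procedure: SAR filings must be reviewed by a compliance officer.",
    "Glossary: a PEP is a politically exposed person.",
]
question = "When is enhanced due diligence required?"
ranked = sorted(chunks, key=lambda c: cosine(embed(c), embed(question)),
                reverse=True)
prompt = ("Answer using only this context:\n" + "\n".join(ranked[:2]) +
          f"\n\nQuestion: {question}")
print(prompt)  # this prompt would then be sent to the LLM of your choice
```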
In terms of some other questions that we're seeing here, again trying to get to them quickly: as I said, various and to some degree different sources. And, of course, anyone who is interested in following up from here, you can reach out to me or to the contact information associated with the invite. Indeed, Dataiku offers both self-service trainings and workshops where you can see all of this in action. Of course, if your firm is interested in leveraging Dataiku in this way, or for any of the other use cases you saw, or other use cases you might now be imagining you want to pursue, you can reach out to us, and Dataiku can set up demonstrations, workshops, and other ways for you to engage. Excellent. Seeing some questions again about protocols and so on. As mentioned before, Dataiku is able to leverage any protocols that your teams have decided on, MCP or A2A, but it does not require that those protocols be used; instead, you can use Dataiku's existing flow. Dataiku is agnostic in that sense and highly collaborative with different approaches. Nice. David, are there any questions that you're seeing? Please go right ahead. Yes. There was a question about what the source system for the transactions is, which I think you answered. But on the graph DB tool that is used: the tool that we use is something we built internally and that is soon to be released. It is leveraging Kuzu as a graph database, and it will also integrate with Neo4j and probably with other cloud databases as well, since we have a policy, a way of integrating with as many tools as possible when we feel they are interesting for our customers. Absolutely. And I saw another question about the agentic framework that we are using underneath. In this particular example, we are using an agent executor, but we also have other examples leveraging LangGraph as our agentic framework. Excellent. I also saw a couple of questions, including a nice quick one about platform support and so on. Yes, Dataiku is completely agnostic as to which models you use. We have built-in connectors for essentially all of the major models out there, in their multiple versions. You can also easily build custom connectors to any arbitrary LLM. And, again, those LLMs could literally be public-facing LLMs, although in the vast majority of cases for a financial services firm that wouldn't be appropriate. Instead, you can just as easily connect to a privately hosted LLM in your own cloud or a completely on-premise LLM. Some of our clients working with particularly secure data have their own internal LLMs, completely inside their own walls physically, and connect directly to those as well. There were a couple of questions about things like compliance, data movement, and so on. Dataiku allows you to access data only as approved by your own internal IT teams. As mentioned, Dataiku can run inside your own walls. There are no incremental security or compliance risks introduced by Dataiku, because it obeys and respects all of the existing constraints and rules that you have in place. So if data is not supposed to be moved across different locations, Dataiku will respect that. But, similarly, if you are allowed to access data that is just in very disparate systems, Dataiku makes that very easy.
So, essentially, you can imagine that whatever existing constraints you have that are appropriate and based on the fundamental compliance of your firm, those will not be altered. But, similarly, if the constraints are more along the lines of ease of access, straightforwardness of access, or systems being inherently disconnected, that is something that Dataiku can immediately help you solve. And we often see teams gain a great deal of benefit from that. It also means, of course, that the IT and technology folks are very comfortable as well, because the fundamental requirements remain in place. Excellent. So, some questions here. Go ahead, David, please. I see a question from Kia: is it possible to create a planner agent using a larger LLM and executor agents? Yes, it is possible in multiple different ways. What I showed you during the demo is a visual agent where you configure a single LLM. You could think about this as the planner agent, so you could give it a large LLM, and for each of the tools, which might be executor agents, you give smaller LLMs (a code sketch of this split follows below). This is one possible architecture. It is also possible, if you have more complex ways of integrating your executors with your planner, to build a completely custom code agent. This is not something that I showed directly, but two of the agents in our architecture are code agents, because they leverage more complex tools that have not been packaged as visual tools. And you have complete flexibility in the LLM that you are using and in the kind of tools that you want to develop, as long as they are Python, something that can be wrapped into a Python function. Absolutely. I'm also seeing some questions about things like hardware, access to it, and indeed using it for other use cases. These are important questions. Dataiku is something that your team would bring into your organization. It can be run entirely in the cloud on our own systems, but in the vast majority of financial services cases, that is not the arrangement used. Instead, Dataiku will be installed into your own private cloud or indeed on your own hardware, on premise. All of those arrangements are possible. This is why we have so many clients working with us who are in extremely large, complex, highly secured institutions, and not just in banking, by the way. Around this, though, the important point is that Dataiku is not simply used for this capability. Dataiku can be used for a vast array of use cases. We mentioned a few of them earlier, but keep in mind that Dataiku is used well beyond AML, across the entire banking universe. We have clients using us for marketing and analytics, clients using us for financial planning, other clients using us for front office support and sales, as well as many operational tasks in the background. This is why, when you saw what David was showing you, he showed you the front end, all of which is completely customizable by your teams. And when you went inside to the meat of the solution, the components that made it up, all of those are available for your teams to customize. They might be customized directly by yourselves using the no-code capabilities or the full-code capabilities, or you might ask teams who support you to build components out or modify them to meet your needs. So take everything that you saw here as essentially a kind of Lego set that we've assembled in a particular way to showcase what could be done.
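Coming back to the planner/executor question for a moment, here is a minimal sketch of that split using LangChain (one of the frameworks mentioned in the Q&A), with a larger model planning and a smaller model executing each sub-task; the model IDs and prompts are illustrative, and this is not how the visual agents in the demo are configured.

```python
# A minimal sketch of the planner/executor split: a larger LLM decomposes
# the work, a smaller LLM runs each sub-task. Model ids and prompts are
# illustrative; this is not the demo's visual agent configuration.
from langchain_openai import ChatOpenAI

planner = ChatOpenAI(model="gpt-4o")        # larger LLM: decomposes the work
executor = ChatOpenAI(model="gpt-4o-mini")  # smaller LLM: runs each sub-task

def run_subtask(task: str) -> str:
    # Any Python function can play the executor role and be wrapped as a tool.
    return executor.invoke(
        f"Carry out this AML sub-task and report back: {task}"
    ).content

plan = planner.invoke(
    "Split this investigation into at most three sub-tasks, one per line: "
    "repeated cross-currency transfers just under reporting thresholds."
).content
results = [run_subtask(line) for line in plan.splitlines() if line.strip()]
print("\n\n".join(results))
```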
You can add more components to that Lego set, take components away, or rebuild them into something entirely new for you. Excellent. In terms of licensing and so on, the most important thing there would be for you to contact us at dataiku.com, or you can reach out to myself, and I will put you directly in touch with our sales team, who will be able to talk to you about what it would look like to trial Dataiku and then bring it inside your organization. At the most basic level, I'll tell you that the way Dataiku ultimately works is that we charge by seats: licenses, users. Therefore, you can build as much as you like, as complex a process as you like, inside of Dataiku, and all that you end up paying for is that you and your team are accessing it. You don't pay for the amount of access or anything like that. So it's a very efficient model, whereby you can use it to achieve everything that you might want without any kind of additional incremental cost. The cost is simply that you use and have access to the system. Marvelous. I know there's a few others... oh, go ahead, David, please. Yeah, there is one about our platform support for the choice of LLM models; I don't think it was answered before. In our example, we are using a mix of Amazon and OpenAI models, but we support, through what we call our LLM mesh, connecting to most of the LLM providers. So we support most of them, including ones that you can self-host and ones that you can get through APIs, in a very secure way. And included in our way of connecting to those LLMs is a way of monitoring and controlling access to them for your teams. You can give access to specific models to certain groups in your company. Plus, we have a way of monitoring the costs and the calls, and limiting all of this, so that you really have control over the way these models are used while still getting the flexibility of being able to connect to all of them. Marvelous. Excellent. So, I'll begin to close up the conversation here. I know there are still a few open questions and so on. For all of those, I would encourage you to travel to our website and follow the links associated with this webinar. You can also, as I said, reach out to myself directly, and I'd be happy to put you in touch with whoever you'd like within our organization who can help move this forward. I think I'll emphasize, and I can see the question here at the end about feedback and so on, that what you saw here is very much the experience of one particular project in action. Remember that Dataiku not only allows you to build many different projects but also allows all of those projects to be governed. So if you imagine the use of Dataiku, it's going to be yourselves using it; it will be teams that support you; it will be teams that support the underlying infrastructure of your entire technology stack, like your core IT technology people; and it will also be compliance, audit, and other teams helping to ensure that everything being done is being done appropriately. All of these teams are able to work together in Dataiku in the various ways appropriate for their roles, so that something built in Dataiku is not simply efficient, agile, and effective, but also highly secure, very effectively governed, and fully auditable.
So I'll leave you with that important point: Dataiku is not only solving your problems but ensuring that they are solved appropriately and effectively. On that note, thank you so much for taking the time with us today. For any follow-up questions, including any questions that didn't get answered, feel free, as I said, to reach out directly to myself or to the broader Dataiku team. We'll be happy to engage with you. So thank you so much. Have a wonderful rest of your day. Thank you.