Video: Let’s Build: AI for Finance Teams | Duration: 43:28 | Summary: Let’s Build: AI for Finance Teams | Chapters: Welcome and Introduction (0:04), Leveraging GenAI Today (2:47), P&L Optimization Success (4:40), AI in Finance (6:48), AI-Enhanced Reconciliation Process (9:58), Reconciliation Solution Demo (13:06), Reconciliation Capabilities Explained (35:42), Current Dataiku Capabilities (35:55), SOX Compliance Features (36:32), Update Management Process (37:48), PDF Processing Techniques (38:44), Conclusion and Q&A (40:33)
Transcript for "Let’s Build: AI for Finance Teams": Hi, everyone. Welcome to our Let's Build AI for Finance teams webinar. Delighted to have you all here, and I believe there's quite a few of you. It'll be, myself and Lea who will be walking you through these very practical uses of AI for finance teams that can be deployed today. We'll focus on one particular use case in detail, but also outline quite a few others that you'll likely find of interest. I myself, John McCambridge, the global director of financial services and finance teams solutions. Lea? And I'm Lea. I'm a senior data scientist, in the solution team at DataIQ. Thank you. So just to set the table very briefly about Dataiku itself, although as you'll see, what we're gonna be talking about is things you can achieve with Dataiku or potentially maybe to somewhat less effect outside of Dataiku. Dataiku itself is a universal, analytics modeling and agent platform, universal AI system. As you can see, there's a few critical components to it. One of the most important though is the collaborative aspect. We will connect to any data sources, any compute systems, any models that you have, and those could be rules based models or machine learning models, any, large language models that you need to leverage. And within Dataiku itself, you can have any, arrangement of players working on a project. You could have a bunch of finance analysts altogether working, a single analyst working solo, or a mix of analysts, business leads, and maybe a data scientist to support them, or even an ETL engineer backed up by an FP and A analyst backed up by an accountant, all inside the same project. And that's achieved in a variety of ways. One of the most important ones being that anyone can use the tooling that makes the most sense for them in their context. So if you need to do full code, that's possibility right alongside somebody who's doing no code at all, just visual clicking on recipes. And, of course, all of this done in a highly secure, well governed platform. This runs inside of your own organization. So you get all of the benefits that you should expect from a no code platform, for example, alongside all the power that you'd expect from a full code platform, all inside of a collaborative space. Now when it comes to AI for finance teams specifically, though, we wanna talk in real concrete terms here about what you can actually achieve today with the technology as it exists now. What you're gonna see here are not, concepts that you might do, down the line if things improve, for example. This is, the kind of work that can be achieved right now with a rather straight, forward arrangement. Somewhere where you store and compute your data, Dataiku to allow you to build and orchestrate all your processes, and then a connection to a large language model to run your generative AI components. That's really all you need. Yeah. That's easier said than done, but that is an achievable goal inside of firms today. And with Dataiku, it's easier to achieve that in a robust way. In terms of this kind of transformational impact, it's very grand. You know, you can have massive improvements in efficiency and speed, agility, and so on. But rather than kind of talk about these in abstract or even in concrete kind of terms, for specific clients, I'd like to talk about what the principles would look like for you in your own situation. The most important thing that I'll emphasize throughout this entire process is that you do not have to wait to use GenAI. 
Generative AI capabilities are available to you today if you have a platform that supports them, which, of course, Dataiku does. You don't need, for example, to have a set of machine learning models in place first and then graduate to using GenAI. In fact, it is extremely common even today for firms, and specifically finance teams, to be leveraging generative AI to process data, manage reporting, and handle problems while having no machine learning at all. They are using GenAI, large language models, but they don't have any machine learning in place. They may, in the future, decide to leverage machine learning for their own needs, like forecasting, but they don't have to have both, and you don't have to adopt them in any particular order. In fact, the most common path for people today is to do some very basic initial work on their analytics, using Dataiku to arrange some data pipelines. Again, not getting things to 100% perfection, just enough data in the right place that you can do something sensible with it. Then they immediately start unifying that data and leveraging GenAI on top of it, and they can add a machine learning layer later. So that's the most important strategic takeaway I want to give you today: you don't have to wait. You can use GenAI sensibly and effectively today, and you're going to see very concrete ways in which that can happen. Now, if we think about where we see our finance teams succeeding with Dataiku, there's a wide array of places, but P&L optimization is one of the most common. They'll bring in all of their different systems and components: different data pipes, reconciliation challenges, maybe some process mining. They'll bring in the intelligence behind their processes: rules-based logic, formulas, statistical analyses they're already performing, and, if they want, some machine learning. And then they'll scale this out using things like document intelligence to process unstructured documents like PDFs, using GenAI to power insights and then reviewing them with a person, and using agents to automate the process. Again, we're going to show you some very concrete ways in which all of that actually manifests today. You'll see one of these in particular has been highlighted, reconciliation, because the heart of today's demonstration is based around that most common of challenges: effective reconciliation of data. You're going to see a classic reconciliation use case, the kind of thing you would have seen years ago, with a data process, matching algorithms, a nice review dashboard, and manual checks. And then you're going to see how that can be massively enhanced using generative AI today. Of course, this isn't the whole of what you can do within Dataiku across your teams. Dataiku is an incredibly broad platform, incredibly powerful for finance teams but also for many other teams within your organization. Here alone, just within finance and its associated teams and groups, you can see a huge sweep of projects. Remember that Dataiku is not a point solution. It's the place where you build and succeed in your various workflows. And one of the most important parts of that is integration. Dataiku is collaborative, like I said before: collaborative with your systems, collaborative across your people, but also collaborative across your use cases.
Some of the most powerful work you can do is only possible by combining otherwise distinct projects, unifying them as a foundation to build on top of. And that's one of the most natural things in the world to do with Dataiku. Now, before we get to the reconciliation use case, which is going to be the heart of the conversation here and which we'll demonstrate in detail, let me quickly cover two other obvious use cases you may want, just to give you a sense of the sweep. Another really classic one, if, for example, you're keen to go down the machine-learning-today path, or you've experimented before and want to revisit it, is financial forecasting. This could be for cash flows, like daily cash flows, or for weekly or monthly revenues or costs. Within Dataiku, it's very straightforward to leverage the platform, and indeed the prebuilt solutions we offer (you'll see one later for reconciliation), to build an incredibly powerful but very straightforward and easily interpreted model for your financial forecasts. These models leverage a variety of techniques, including very classic statistical approaches, things like autoregression, the kind of analysis you might have done for many years. Alongside them, the platform lets you build and experiment with simple machine learning models and, critically, evaluate them against one another and against any historical processes you've used. So you can actually say to yourself: well, I could use a simple statistical model at the click of a button. Oh, look, that's pretty predictive. I could put a lot more effort into building a complex machine learning model with drivers and all sorts of interesting factors. But then what I want to do is compare the two and ask: is it worth the extra effort? Is it worth having to explain the complexities of that machine learning model? And if it is, how can I explain them cleanly, effectively, and simply? All of those problems can be solved by building out this structure inside Dataiku (a minimal sketch of that baseline-versus-ML comparison follows just after this section). So for those of you interested, we could always discuss our machine learning and classical statistics financial forecasting solution in the future. Another completely distinct approach you might want to take is an agentic solution to something like receivables management. Here, we have a simple sketch of an agentic framework in which an AI agent directly supports a human agent handling receivables. In this case, you're leveraging a variety of interconnected systems: existing models around, say, risk scoring, rules about whom to contact and when, and the interpretation of unstructured documents like invoices or vendor contracts and master services agreements, which might contain critical clauses. All of that is unified into a chatbot interface, with a robust dashboard on top for managing your individual outstanding contracts, and, most critically, the ability for a human to leverage all of it to speed up their own work. You're not handing everything over to an AI agent and hoping for the best, hoping that it does what it's supposed to do and doesn't mess up or confuse matters. You have a human agent who sits alongside that AI agent and is the final arbiter of what gets done.
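Returning to the forecasting point referenced above: here is a minimal sketch, outside Dataiku, of the "simple statistical baseline versus machine learning challenger" comparison. The synthetic revenue series, the AR(12) baseline, the gradient-boosting challenger, and the 24-month holdout are all illustrative assumptions, not the forecasting solution's actual implementation.

```python
# Minimal sketch: compare a simple autoregressive baseline with an ML model
# on the same lag features, then judge whether the extra complexity pays off.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical monthly revenue: trend + yearly seasonality + noise.
months = np.arange(120)
revenue = 100 + 0.8 * months + 12 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 3, 120)

def make_lag_features(series, n_lags=12):
    # Each row is [y[t-12], ..., y[t-1]]; the target is y[t].
    X = np.array([series[t - n_lags:t] for t in range(n_lags, len(series))])
    return X, series[n_lags:]

X, y = make_lag_features(revenue)
split = len(y) - 24  # hold out the last 24 months for evaluation
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# Baseline: an AR(12) model fitted by ordinary least squares.
A = np.hstack([X_tr, np.ones((len(X_tr), 1))])
coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
ar_pred = np.hstack([X_te, np.ones((len(X_te), 1))]) @ coef

# Challenger: a gradient-boosted model on the same lag features.
gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
gbm_pred = gbm.predict(X_te)

print(f"AR(12) baseline MAE:   {mean_absolute_error(y_te, ar_pred):.2f}")
print(f"Gradient boosting MAE: {mean_absolute_error(y_te, gbm_pred):.2f}")
# If the gap is small, the simpler, easier-to-explain model may be the better choice.
```

The point of the sketch is the evaluation side by side: if the complex model barely beats the statistical baseline, the explainability cost may not be worth paying.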
It's all about accelerating, making the work more efficient, taking away a lot of the grunt work and manual slog that human agent had to do, and allowing them to focus on the critical decisions that are actually the heart of their role. So those are two other examples, along with the many others you would have seen on the earlier slide. But now let's turn to what you're going to see in depth today: GenAI-enhanced reconciliation. We take one of the most classic challenges for any finance team, the reconciliation process, and we want to show you how you can achieve it today in Dataiku in the classic sense, in which you have everything you need: data inputs; matching, including fuzzy matching capabilities; rules; an audit log of everything that's been done, including how it was set up and which matches were handled; dashboards to monitor the results; and a manual interface to resolve those pesky unmatched results that need human intervention. That's the classic way to do reconciliation, and we show you exactly how to do it beautifully inside Dataiku. But vitally, you're going to see two additional components that wouldn't have been possible a few years ago but are possible today. The first is using a large language model to load in some of the critical data in the first place, to clean and process it in intelligent ways, using either deterministic formulas like you would have in the past or the large language model itself. You'll see how easy that is to set up and how that kind of cleaning beforehand can dramatically improve the reconciliation process itself. And then, vitally, at the other end of the process, once you've gathered all of those manual reconciliations, all those records you had to go in and check by hand because they slipped through the existing rules, you've likely sat and thought to yourself: I really wish I could know, at the click of a button, what implied rules are actually being enacted when my team manually resolves these. What I should be doing is figuring out whether I could improve the matching rules at the start of the process so that I'm not constantly handling these edge cases at the end. Sure, maybe there will always be a few edge cases, but surely it doesn't have to be this many. Surely the percentage of edge cases should fall as I continue to do this manual work. In reality, that often doesn't happen. It's often confusing to handle. Maybe you don't have the right data to hand because you're not generating audit logs effectively (if you used Dataiku, you would, of course). Maybe it's just too much work and you're not sure about the return on investment. Or maybe the team isn't even sure how to start. You'll see a perfect example of how GenAI can handle that problem using an agentic approach while keeping a human fully in the loop and allowing them to make the final decision about what changes to make upstream. So with that initial setup, the fundamental reconciliation components in the middle, supported on both sides by GenAI enhancement, I'll hand this over to Lea, who will walk us through the process in some detail. Everything you're going to see runs today inside Dataiku and would be available to you as a Dataiku user.
And you're going to see, just as importantly, how the principles apply, how you can use GenAI in this sensible way to enhance and improve an existing, challenging process. So with that, I'll hand it over to Lea. Thank you, John. Give me a few seconds to share my screen, and we should be good. Okay, so you should be seeing my screen now. We can see your flow. Perfect. So the objective of today's demo is to show you, as John mentioned, the reconciliation solution, which allows you to reconcile two datasets that do not perfectly match. And we will also deep dive into the GenAI modules that come after the solution. But before starting the solution demo, I will give you a quick example of a very common data preparation challenge that we can face before doing reconciliation. Here you can see an example of unstructured data. The text includes trade information, but our goal is to convert this text into well-formatted columns. Using visual prompt recipes in Dataiku, we were able to go from the previous dataset to this well-formatted version. And in this example, what we want to do is clean the amount column further so that it only includes numerical values. There are two easy ways to do this in Dataiku. One way is to create a prepare recipe and generate steps using prompts. If I hover over this little icon, you can see the prompts that were used to clean the data. In this case, the results are quite good, except for this row, where the double "M" was not processed well by my steps. There is another way, which is more robust in this case: using a very simple prompt recipe. In this prompt recipe, I am using the amount column as an input, and I'm also including a few examples. I'm deliberately not including an example with the double "M", because I want to see if the model is good enough to process it on its own. At the end, I should get a perfectly formatted column, and all of this works perfectly. So now let's imagine that your data is clean and you want to reconcile two datasets that do not perfectly match. The reconciliation solution offers a range of matching capabilities, including a perfect join for exact matches, a fuzzy join based on custom criteria, and access to a web application where you can make manual matches. To use the solution, you go to the project setup, where you can upload or connect to your datasets, and then set up a number of parameters. Let me scroll down a bit. The most important part is to select the corresponding columns, which are referred to as keys here. For each pair of keys, we need to set a matching distance threshold to define the acceptable level of dissimilarity between the values in the key columns; this threshold is used in the matching procedure itself. After that, we can establish an automatic matching threshold and a maximum number of pending matches. When the project has finished building, we see the results of the reconciliation in the dashboard. The reconciliation analysis provides an overview of how records are categorized based on their alignment with the secondary dataset. The records may fall into one of several match types: perfect, automatic, pending, or manual, or they may have no match at all.
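To make those match types concrete, here is a minimal sketch of bucketing record pairs by a similarity score against two thresholds. The use of Python's `difflib`, the column values, and the threshold numbers are all illustrative assumptions, not the solution's actual matching implementation.

```python
# Hypothetical sketch of threshold-based match classification. Uses only the
# standard library; the real solution's fuzzy join is configured visually.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Ratio in [0, 1]; 1.0 means the key values are identical.
    return SequenceMatcher(None, a, b).ratio()

def classify(primary_key: str, candidate_key: str,
             auto_threshold: float = 0.9, match_threshold: float = 0.7) -> str:
    score = similarity(primary_key, candidate_key)
    if score == 1.0:
        return "perfect"    # exact join
    if score >= auto_threshold:
        return "automatic"  # close enough to accept without review
    if score >= match_threshold:
        return "pending"    # queued for the manual matching web app
    return "no match"

pairs = [
    ("ACME Corp", "ACME Corp"),   # identical -> perfect
    ("ACME Corp", "ACME Corp."),  # trailing period -> automatic
    ("ACME Corp", "AMCE Corp"),   # transposition -> pending, needs review
    ("ACME Corp", "Globex Ltd"),  # unrelated -> no match
]
for p, c in pairs:
    print(f"{p!r} vs {c!r}: {classify(p, c)}")
```

Pairs that land between the two thresholds become the pending queue that the manual matching web app works through.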
On the first run, we would see that we do not have any manual matches yet. This is because all the potential manual matches haven't been flagged manually and are still considered pending matches. The manual matching web app allows us to manually approve or reject matches based on our own review. You have two options: you can either go to the focus mode, where you can see each potential match and review it manually one at a time, or you can go to the full table and see all of those matches at once. Here again, you can match them manually on the right. After the review in the manual matching web app, we can see all those new matches in the dashboard; that's how we end up with manual matches there. Now let's imagine my team and I have been using this web app for a week, and our system logged all the manual matches we approved or rejected. Ideally, we would love to know whether we could deduce new matching rules from how we used the web app. But it could be quite time-consuming to do that manually and to try to figure out which patterns we followed when deciding to approve or reject matches. This is why we added an optional GenAI module to the solution, which has not yet been publicly released but will be available soon. Here you can see the flow zone where we built all the GenAI components; we mainly used prompt recipes to build them. As an input, we have the users' decisions stored in this column, along with the actual values that were compared and the decision that was made. In the first prompt recipe in this flow, we ask the model to review all those decisions and generate new rules. Then we have a second prompt recipe, where we generate a potential Python function for each rule. So we end up with a textual rule and a function, which is what you can see in this next dataset: the rule, which is textual data, and then the function. But to be able to review these rules, we need a user interface, and to build one we need a few more steps in the flow. The first step converts the markdown text into HTML, and the second step adds a title to each review. I'll use this prompt recipe to show you a good way to iterate over different prompts in Dataiku. Here you can see the last prompt I used, the final one that I picked. But before deciding to go with this prompt, I first tried other options, and to do that I used the prompt studio in Dataiku. If I click here, I can access the prompt studio, and you can see on the left the other prompts that were tested. For each prompt, I was able to add an input, here the review, and to select a sample on which to test the prompt. So here I'm testing the prompts, and I can directly see the results. If I'm happy with the results, I can export the prompt as a recipe, and by doing that, I can run it in the flow. That is what is happening here. In the end, I end up with this dataset, which includes the name and number of each review, the function that was deduced by one of my recipes, and the HTML content. So, last step: we now have a nicely formatted user interface where we can review all those generated rules.
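Before we look at the review interface, here is a hypothetical sketch of the two-prompt pattern just described: one prompt asks an LLM to summarize logged decisions into candidate rules, and a second turns a rule into a Python function. The decision log, the prompt wording, and the example generated function (modeled on the country-code rule discussed next) are all illustrative assumptions; the module's actual prompts are not shown here.

```python
# Hypothetical sketch of the two-prompt pattern: deduce candidate rules from
# logged manual decisions, then express each rule as a Python function.
decision_log = [
    {"primary": "1FR", "secondary": "1FRA", "decision": "approved"},
    {"primary": "2DE", "secondary": "2DEU", "decision": "approved"},
    {"primary": "GBR", "secondary": "GB",   "decision": "rejected"},
    {"primary": "3US", "secondary": "3USA", "decision": "approved"},
]

# Prompt 1: ask the LLM for textual rules that explain the decisions.
rule_prompt = (
    "Review the manual match decisions below. Propose general matching rules "
    "that explain when a pair was approved or rejected. State each rule in "
    "one sentence and note how many decisions support it.\n\n"
    f"Decisions: {decision_log}"
)

# Prompt 2: ask the LLM to turn one textual rule into a Python function.
function_prompt = (
    "Convert this matching rule into a Python function "
    "`rule(primary: str, secondary: str) -> bool`:\n"
    "'Approve the match when the country code starts with a digit.'"
)

# The kind of function the second prompt might come back with:
def rule(primary: str, secondary: str) -> bool:
    return primary[:1].isdigit() and secondary[:1].isdigit()

# Before validating a generated rule, back-test it against the logged decisions.
hits = sum(
    rule(d["primary"], d["secondary"]) == (d["decision"] == "approved")
    for d in decision_log
)
print(f"Rule agrees with {hits}/{len(decision_log)} logged decisions")
```

Back-testing a generated function against the decision log is one simple way to support the reliability warnings that the review interface surfaces.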
So here we have a drop-down, where we can select the different reviews that were built by the LLM. If we go to review one, there is a first disclaimer to make sure we know this rule was generated by an LLM. After that, we also have a warning. This warning is there because we didn't have enough information to make a reliable rule. In that case, we might not want to validate the rule, and that's clearly emphasized by the report: if we scroll down, we see that the rule is not reliable and that we should not move forward with it. On the contrary, if we go to the next review, we have a rule without the warning, and in that case, we can see that a clear pattern was detected. The pattern is quite straightforward: it says that every time the country code has a digit at the beginning, the match was approved, and when it did not, the match was rejected. So we have the deduced rule, and we also have a Dataiku formula at the bottom. On the right, we have a summary of the status of those rules. Let's say, in this case, we were happy with the rule detected, so we decided to validate it. By doing so, we would see this rule appearing on the right, and the rule would be pushed into the flow, at the beginning of the reconciliation process, so that the next time we run the reconciliation, we see the automatically validated matches. Once the rule is pushed back into the beginning of the flow, the new rule is included in the reconciliation process, and we do not have to apply it manually again. The team will have fewer matches to review. And that's it for the demo. In the next version of the reconciliation solution, which will include this module, we will be using agent code tools, which are tools wrapping code that was previously reviewed and approved by a user. That way, the code used to apply a new rule will not be insecure. Thank you, Lea. So, returning to our slides, we'll finish on this final slide here. As mentioned, we're delighted to answer any questions you might have; feel free to post them in the chat. I'll emphasize a few of the critical points that were just made, to make sure folks are comfortable with them. As you saw, within Dataiku, not only can you build these powerful capabilities, but you can build them in an incredibly controlled, auditable, and appropriate manner. This is one of the most important things: Dataiku not only gives a lot of power to people, it also allows very effective and definitive guardrails to be placed around that power. This is why it is respected and appreciated both by the IT or technology team you work with and by the users on the front line within the finance teams. It allows you to bridge what can often be a big gap there, because both groups feel satisfied with the controls and the capabilities in place. You'll also notice that Dataiku has a lot of best practices baked right into it, practices that, when you first start working, you may not even realize are best practices that need to be understood. As you can see, there are things like the ability to use tools instead of simply generating code. Right?
This is one of the best-known best practices for agentic use cases, because it allows you to give those agents a great deal of power while keeping that power tightly constrained along a fixed set of dimensions. And it resolves one of the most obvious immediate concerns teams might have about leveraging an agent for this kind of use case. Additionally, Dataiku allows you to cover the full sweep of a problem and also to break it into pieces. Some of the components Lea showed at the start, using GenAI recipes to process data, can be used for any type of project. They don't have to be used for reconciliation; you could imagine using them a thousand different ways to solve existing problems. And as you saw, you're not forced to use GenAI all the time. You could just build the steps yourself, manually, using the existing visual recipes or with code. You could use the AI to assist you, but then review its final output and decide: yes, I'm going to use this fixed formula. The AI sped you up, but the result is a pure formula that will always give you the exact same correct answer every time, just as it would have before in something like Excel. Or you can leverage a GenAI component to process the data continuously, and in many cases solve problems that sit at the edges of your existing workflows. And in fact, you can combine them. It's not at all uncommon to solve the problem with a formula while simultaneously having the same records processed by a GenAI recipe to check for gaps, then joining the two outputs and looking for differences (a minimal sketch of this follows below). When there are differences, you resolve them by asking: why is the GenAI capability generating this output while the formula generates another? It often indicates some gap in the coverage of your formula, which lets you uncover edge cases. And again, that is applicable across the board. Just as importantly, as you can see, there are many components here that all need to be in place for this to be fully successful. It's not very helpful if you process data and perform some good fuzzy matching, but then have to spit all your manual matches out into Excel; you lose a huge amount of the value of a constrained, audited process. Similarly, it's not very helpful if you have to force all of your GenAI work into a completely external tool, one that likely doesn't even allow you to push records in row by row. Maybe you have to dump them all into a chat, try to give it some instructions, hope that the answer you get back is useful, and then copy-paste it back in or something. That's not an effective way to build a process, and it can't be automated properly. So again, as you saw, using prompt recipes to process data continuously, row by row, is one of the most powerful ways to leverage GenAI properly. It's also the kind of thing you can't get out of a chatbot. And so this is one of the most important ways to think about how you would be leveraging Dataiku in the future.
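Tying this back to the amount-column example from the demo, here is a minimal, hypothetical sketch of running a deterministic cleaner and a GenAI cleaner over the same rows and flagging disagreements. The regex, the sample values, and the stub standing in for a prompt recipe's output are all assumptions for illustration.

```python
# Hypothetical sketch: run a deterministic formula and a GenAI cleaner over
# the same rows, then diff the outputs to surface edge cases. The stub below
# stands in for a prompt recipe; in Dataiku that column would come from one.
import re

def formula_clean(amount: str) -> float | None:
    # Deterministic cleaner: strip prefixes/commas, expand "MM"/"M"/"K" suffixes.
    m = re.fullmatch(r"[^\d\-]*(-?[\d,]+(?:\.\d+)?)\s*(MM|M|K)?.*", amount.strip(), re.I)
    if not m:
        return None
    value = float(m.group(1).replace(",", ""))
    scale = {"K": 1e3, "M": 1e6, "MM": 1e6, None: 1.0}[m.group(2) and m.group(2).upper()]
    return value * scale

def llm_clean(amount: str) -> float | None:
    # Stand-in for the prompt recipe's per-row output (hardcoded for this sketch).
    return {"USD 1,250.50": 1250.50, "2MM": 2_000_000.0,
            "approx. 300K": 300_000.0, "EUR 45": 45.0,
            "Total: 3M shares": None}.get(amount)

rows = ["USD 1,250.50", "2MM", "approx. 300K", "EUR 45", "Total: 3M shares"]
for raw in rows:
    f, l = formula_clean(raw), llm_clean(raw)
    status = "OK" if f == l else "REVIEW"  # disagreements point at formula gaps
    print(f"{raw!r:>20} formula={f} llm={l} -> {status}")
```

In the last row the formula confidently parses a share count as a monetary amount while the stubbed GenAI output does not; that disagreement is exactly the kind of edge case the diff is meant to surface for a human.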
We have a bunch of great questions that have been asked, so let me start tackling a few of them. I'll start down at the bottom, and I'll quickly cover a couple that were already asked and answered. This solution, like any of the solutions built in Dataiku, is available to our users as is and can be installed directly with the click of a button. And indeed, when you install it, it appears as if you yourselves had built it directly. There are no black boxes, no hidden components, no special components that only exist in that solution; it is all built out of capabilities that are inherent to Dataiku. So when you install a solution, it's fully available to you and, critically, it's completely customizable. You can rip parts of it out, add completely new components, or keep it as is, whatever makes the most sense for your particular needs. And, of course, you can copy solutions and make iterations of them. So you can have one reconciliation process over here that does one thing, and a variant over there with different rules and capabilities that will just run happily, fully automated. There are a few more great questions here. Information about privacy and the location of your data centers: a great question. Dataiku does not itself sell data center or data access to you. If you like, you can buy Dataiku as a SaaS product, in which case you do get access to our data centers, and we can provide you with full details. But in many cases, firms deploy it inside their own walls, often on their own private cloud. In that case, and indeed even with a SaaS deployment, what you're often leveraging are your existing data centers and your existing compute. We have partnerships with all of the major hyperscalers, GCP, Azure, and so on, as well as the major aggregators in that space like Snowflake and Databricks, and with providers of AI services such as NVIDIA. So you are able to choose, mix, and match your underlying technologies as you see fit, which is very important for finance teams. You can also connect really classic systems like SharePoint or FTP servers if you need to, because sometimes the data sits there and hasn't been migrated to a cloud yet. Another great question: how long does the typical reconciliation use case take from design to deployment? It, of course, depends on the complexity of your use case. However, as Lea showed you, if you have a simple reconciliation use case, you're matching two datasets against each other, you already have a sense of what rules you'd like to try out, and the data is relatively clean, it would be quite reasonable to expect to get it up and running in one day. You would literally click a button to install it, which takes maybe thirty or sixty seconds. You would go in and configure the application, which, as you saw, is very straightforward. Assuming you already have your data connections in place and IT has approved them, you would line them up, process the data through, click go, and it would run. From there, you'd likely spend some hours tinkering with it, playing with the matching rules, experimenting with the web application, and so on. But that fundamental capability could easily be spun up within the first day. And indeed, it makes for a great evaluation of Dataiku; it could be a great way to check out how Dataiku performs for you and get comfortable with the platform very quickly. There is a great question here about whether we could do a demo of the revenue and cost forecasting. We do have a little bit of time.
It wasn't the original plan; we were just going to show you the reconciliation solution. So what I'd encourage there, and maybe we can provide the links, is that we show you where our full catalog lives so you can see all of these in action on our website, with the same scope that you saw from Lea here. You can actually see them running with realistic data (all synthetic, of course), see how they produce outputs, and see all of the pieces inside them. That will give you a great sense of how to understand Dataiku, and it will also let you see things like our financial forecasting solution and the many other solutions we have; we have dozens of them. You can also see process mining, another favorite of mine. We'll make sure that link gets pasted into the channel for you. Another great question: inputs to the LLM, auditability, and getting the same results with the same attributes. Indeed, a great question. Within Dataiku, every component that you pass into your large language model is fully customizable, so you have the same level of parameter control as you would if you were building it in code. You can, of course, leave things at their defaults as you see fit, but there are not going to be things changing in the background without your knowledge. There are no black boxes wrapped around any of the parameters being entered; you can always customize them and, for example, keep a fixed seed. Additionally, Dataiku provides complete auditability and tracing for all of your GenAI interactions. You have logs of all the questions and answers being sent back and forth, and you also have logged traces of the actual process the LLM went through. So if you use Dataiku to produce any kind of GenAI output, a trace is generated for each query showing how the LLM actually executed the task. Those are all saved and can be audited at will. They can also be used, by the way, to impose guardrails: you can have live checking for things like improper commands or improper data inputs, processed live as the queries are being sent to whatever your chosen LLM is, which, by the way, could be an on-premises LLM, a private cloud LLM, whatever is appropriate for your company. Do we handle one-to-many reconciliation? Yes, indeed. We have one-to-one as well as one-to-many reconciliation capabilities, a very reasonable question. And if you'd like to discuss what that would look like for your own particular needs, we'd be happy to. Another great question: all of the capabilities that you saw exist in the latest version of Dataiku. The exact version number escapes me, 14-point-something, but yes, everything Lea showed you runs in the latest version of Dataiku, available to any user today. Obviously, your company needs to be up to date with its installation, but there's nothing here that's coming soon or anything like that; it's already fully available inside the platform. Fantastic, glad to be able to answer a bunch of those. Let me see if we have any others. Oh, there's an interesting question here that hasn't been answered yet, around SOX compliance. One of the interesting things I'll tell you about SOX compliance, and compliance generally, is that a lot of these capabilities are baked right into Dataiku.
So if you were to break down what your SOX compliance needs might look like, for example, you'd end up with a list of things like data controls for data edits, access controls, the ability to lock a project into a specific state and know if it has changed, the ability to govern which components of a project can be accessed by whom, and the ability to prove that a particular number was generated by a specific process, and for that to be known historically as well, so you can look back over time and prove where a number came from. All of those capabilities are built right into Dataiku just by using it normally. In fact, many teams find that the kinds of audit and compliance challenges they have with other systems are essentially handled automatically by Dataiku. Of course, people still need to review audit logs, make sure systems are backed up, and so on, but that's just classic hygiene. You don't have to do special parallel work to ensure you have robust, auditable trails of everything that has happened inside Dataiku. Marvelous. I'll just check if we have any other questions floating around, maybe in the chat box; I see there's quite a lot going on in here. So, the way our solutions work is that we do not push updates to our solutions out to customers in some kind of automatic manner, because we don't want them to be disrupted in the way they work. But the latest versions of a solution are always available online and can always be installed by a user. So you keep the version as you installed it, and then, if you like, you can migrate to a new version as you see fit. We try to strike a careful balance there because, again, these projects are very likely to have been customized, even in small ways, by your teams. We don't want to interfere with that, and we really want them to be treated as projects of your own that you can own, customize, and be confident in. I do see one other question here about data that comes out of PDFs. Absolutely. Dataiku has built-in OCR and multimodal GenAI processing capabilities. So if you have a PDF document, you can easily use traditional OCR techniques, things like Tesseract from Google, and apply those. But you can also, as Lea was showing with those prompt recipes, process PDFs directly: you can use an LLM to process your PDFs. Those are wildly effective nowadays. Using the latest generation of large language models instead of OCR techniques is very common now and shows a very significant uplift. So if you didn't experiment with OCR before because the technology seemed a little complex to use, or you tried it and the results didn't seem sufficient, retry those use cases now with the latest generation of LLMs. You are likely to be very impressed by the outputs. Multimodal LLMs, which is what all the latest generations are, are a very, very effective way to process and interpret documents. Once you process documents, by the way, you then need to use them in an effective manner. The most common way to use a document like that is a knowledge bank. Again, in Dataiku, that's just a click away: a couple of steps to process the documents, and another step to spin up the knowledge bank, where those documents are embedded and can then be queried.
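To make that embed-and-query pattern concrete, here is a minimal stand-in outside Dataiku. The chunks, the query, and the use of TF-IDF vectors in place of a real embedding model are all illustrative assumptions, not how Dataiku's knowledge bank is implemented.

```python
# Minimal stand-in for the knowledge-bank pattern: chunk documents, turn the
# chunks into vectors, and retrieve the nearest chunks for a query. TF-IDF is
# used here purely as a stand-in for a real embedding model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical chunks, e.g. extracted from invoice PDFs upstream.
chunks = [
    "Invoice 1042: net amount EUR 12,400, payment due within 30 days.",
    "Master services agreement: late payments accrue 1.5% monthly interest.",
    "Invoice 1043: net amount USD 9,800, payment due within 45 days.",
]

vectorizer = TfidfVectorizer()
chunk_vectors = vectorizer.fit_transform(chunks)

def query_knowledge_bank(question: str, top_k: int = 2) -> list[str]:
    # Project the query into the same vector space and rank chunks by similarity.
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, chunk_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [chunks[i] for i in ranked]

# The retrieved chunks would then be passed to an LLM as context, e.g. by a chatbot.
for hit in query_knowledge_bank("What interest applies to late payments?"):
    print(hit)
```

The structure is the same whether the vectors come from TF-IDF or a proper embedding model: store vectors once, then retrieve the closest chunks as context for each question.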
So all of that, processing a PDF, putting it in a knowledge bank, and then querying it in some way, maybe with a chatbot or something else, would be just three or four steps in Dataiku and could be entirely visual. You wouldn't need to use any code of any kind to handle that. Marvelous. So I think we're getting close to our forty-five minutes, and we've got a little bit of extra time. If anybody has any questions they'd like to ask, I'm delighted to answer them, but it looks like we've been able to tackle the most important ones. If any of the solutions in the catalog that Marissa posted the link to are interesting to you, feel very free to reach out using the contact information here or through any other contacts you have for follow-ups. We'd be delighted to showcase them to you. And, of course, as I said, you can also explore them yourself if you like and decide whether it's for you and whether you want to move forward. Excellent. It looks like we've come to the close of the questions; I'll give a moment if anybody wants to get something in at the last minute. No? Excellent. So do please feel free to reach out to us. The solution you saw today is readily available. Oh, one last question: you absolutely can read information from emails as a source. As I said, Dataiku is highly collaborative with different source systems, so you can certainly read email information. Of course, that will have to be set up by your IT team; the appropriate connections will have to be in place. But yes, connecting to, say, an Exchange server to read mail is a very common use case, and obviously it comes with very clear controls. By the way, you can also use Dataiku to send emails, Teams messages, and Slack messages. Any project in Dataiku can be automated, and one of the most common things you'll want an automated project to do is send a message of some kind if it produces some outcome. So you could have an automatic, deterministic messaging system: if this number looks like this, send an email or a Slack message to this person. Or you could have an agent make that decision for you: if the numbers look bad in some more general way that the agent can assess, it elevates that to somebody over Slack, Teams, or some other system. Great; I'm glad to have been asked that, as it's one of the important integrations worth mentioning. Alright. Like I said, please feel free to go to the catalog, see all of these solutions in action, and know that everything you've seen here is extremely usable today. Nothing shown is something that will maybe work down the line; every example here works today. All you need is a decent large language model (plenty are available), somewhere your data can sit and be processed (there's bound to be something like that around, whether it's an old-school SQL Server or a fancy cloud), and a place to do all of this work, and I would recommend that you do it in Dataiku. Alright. Thank you all so much for your time. We're delighted to answer any follow-up questions you have and to work with you on any of these projects and others you might find interesting. Thank you so much. Take care. Thank you.