Service X-Factor

Demystifying AI: Beyond the Hype with Sahib Sawhney

Scott LeFante Season 1 Episode 4

Looking past the AI hype cycle can be challenging when tech influencers and executives make bold claims about artificial general intelligence revolutionizing industries overnight. Sahib Sawhney, our expert guest with four years of dedicated AI experience, offers a refreshing reality check on what today's AI systems can and cannot do.

"These models are not actually reasoning," Sahib explains, revealing that even the most advanced language models are fundamentally pattern recognition systems, not thinking machines. While they can process vast amounts of information and generate convincing outputs, they lack true understanding. This distinction becomes crucial when organizations consider how AI fits into their technology strategy.

The conversation delves into practical implementation through Model Context Protocol (MCP) servers, which Sahib describes as "the USB-C of LLMs." Unlike traditional APIs requiring precise programming, MCP provides a standardized way for language models to communicate with various tools and data sources through natural language. This creates remarkable efficiency, allowing organizations to connect multiple AI systems to various business applications through a single, unified interface.

Security and governance emerge as essential considerations, with Sahib offering practical advice for securing AI implementations using tools like Microsoft Entra ID, API gateways, and security certificates. The discussion highlights a critical approach: understand your governance strategy first, then implement AI solutions accordingly, rather than rushing to adopt technology without proper security measures.

Whether you're considering implementing AI in your organization or simply want to understand what's behind the hype, this episode provides the clarity and practical knowledge needed to approach AI strategically. Listen now to gain insights from someone working at the cutting edge of AI implementation, and prepare your organization for the real opportunities this technology presents.


Speaker 1:

Welcome everyone to another episode of the Service X Factor podcast. I am one of your hosts, Scott LeFante, and here, of course, with my esteemed colleague Quad Quad. How's it going, man?

Speaker 2:

What's going on? Everyone, how are you doing?

Speaker 1:

Good, man. We're in the heat of the summer, things are spicing up a little bit around here, but everything is good, and I am excited about our guest, aren't you? I absolutely am. And since you know our next guest personally, I'm going to pass the honors over to you to introduce our next amazing guest.

Speaker 2:

All right. So we were chewing on this topic of AI, and we couldn't think of anyone better suited. So I had to reach out and call someone who knows a little bit about the subject. This gentleman goes by Sahib Sawhney. Sahib, please introduce yourself, my brother.

Speaker 3:

Thanks. Hey everyone, how's it going? Sahib here. Yeah, I've just been in the AI space for, you know, I want to say four years now. Before that it was data science. I created a financial education company with my brother, so we were doing data science and I was building all those databases and stuff for his first company there. Before that I was a tax consultant. Everybody loves taxes, I know. Riveting job for sure. But I went to school for computer science, so then I was like, might as well use that degree. So I switched over and, yeah, it's been on the up ever since.

Speaker 1:

Modest, I love it, love it. It's great to have you on the podcast here. We talk a lot about service and AI, and AI being such a hot topic, I think a lot of our listeners understand somewhat what AI is, but it's evolving so much and so fast. One of the things I'd like to know is, I mean, how do you stay on top of everything, considering it is evolving so quickly?

Speaker 3:

Yeah, for sure. I mean, one of the things I definitely focus on is to ignore the hype. You know, there are a lot of those influencer hype people out there that like to, you know, inflate the capabilities of what's currently out there. So one thing you need to focus on is that, sure, there are new things happening, new models, LLMs are getting bigger and better, et cetera, but at the end of the day, these are incremental increases. Especially when it comes to these reasoning models. I mean, we just saw that article that came out recently from Apple, I believe it was called the Art of Reasoning or something like that, where you had these models that they tested.

Speaker 3:

They were reasoning models, and they were tested on complex problems like the Tower of Hanoi, which is a classic complex reasoning problem. At the lower difficulty you had standard models beating the reasoning models, because the reasoning functionality they put in was tripping up these models. Then, of course, at the medium and higher levels the reasoning models were doing a bit better. However, they were still not solving the issue, even when they were given these algorithms to actually solve the problem.

Speaker 3:

So what we take from that article, essentially, is that these models are not actually reasoning. What they're doing is they're just large pattern recognition models given the facade of reasoning. So there are just things out there that you need to realize: when people say AGI is coming in one to two years, or five, and people like Dario Amodei are saying 50% of white collar jobs are going to be destroyed in the next two years or so, you just got to ignore the hype. We're nowhere even close to AGI. So, when it comes to handling the rapid evolution of AI, you just need to realize that, hey, sure, these are incremental increases, and there are certain protocols out there that we need to stay on top of, like the agent-to-agent protocol and the MCP protocol, but these are just incremental increases and you have some time to understand them.
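
For context on the puzzle referenced above: the Tower of Hanoi has a well-known recursive solution, and the short Python sketch below (added purely for illustration, not from the episode) shows the kind of exact, deterministic algorithm the models were reportedly handed and still struggled to follow as the disc count grew.

```python
# Classic recursive Tower of Hanoi: move n discs from `source` to `target`
# using `spare` as the auxiliary peg. The optimal solution always takes
# 2**n - 1 moves, which is why difficulty explodes as n grows.
def hanoi(n: int, source: str, target: str, spare: str, moves: list) -> None:
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the way
    moves.append((source, target))               # move the largest remaining disc
    hanoi(n - 1, spare, target, source, moves)   # restack the smaller discs

moves = []
hanoi(4, "A", "C", "B", moves)
print(len(moves), "moves:", moves)  # 15 moves for 4 discs
```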

Speaker 2:

All right, so that's awesome news, right? But someone might have heard what you just said and taken offense to it. They're like, wait a minute, are these guys literally telling us not to buy into the hype? So let me ask you a real quick question, because I know where you're coming from, but for our audience's sake I have to ask: can you tell us exactly how AI works? If you tell us, that'll help us narrow down the hype and figure out where it fits for our organizations. Can you kind of explain how it really works?

Speaker 3:

Yeah, I mean, in terms of these LLMs, right, these are just large pattern recognition models. So when it comes to something like software development, when they say software is being developed by these models, like 30% of our software is being developed by them, this is not new algorithms being generated. These are boilerplate templates. You know, like when you go to Visual Studio Code and you ask it to create a template for, I don't know, an Azure function that's wrapped in an HTTP request, and it gives you a template. That's essentially what these LLMs are doing. They're trained on prior code that's been developed. It's nothing new.
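
As a rough illustration of the boilerplate he's describing, here is a minimal HTTP-triggered Azure Function template in Python (using the v2 programming model); the route name and message are arbitrary placeholders, and this is just the sort of scaffold an assistant or IDE spits out on request.

```python
# Boilerplate HTTP-triggered Azure Function (Python v2 programming model).
import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="hello")
def hello(req: func.HttpRequest) -> func.HttpResponse:
    # Read a query-string parameter and echo it back.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```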

Speaker 3:

So it's not like these models can come up with novel ideas that could, I don't know, solve cancer or something, right, because the data they're trained on is just the data they're trained on. They can't actually think. So the way LLMs work is that they have this large repository of data, and that's why, when these guys keep increasing the size of their models, it means they can handle more data. They understand things better, they have more knowledge. So when you ask a question, they can essentially understand how to do things a bit better because they have more data to look at. But when it comes to AI right now, it's largely, like I said, pattern recognition, and that's literally all it's doing right now.

Speaker 2:

Pattern recognition and predictive responses, correct?

Speaker 3:

Essentially, yep, exactly.

Speaker 2:

Yep. So, wow, man, these guys are just totally destroying AI right now. No, no, no, no, no. What we're doing is we're setting the bar, because as much as we're setting expectations with our customers, we're also helping them align to where it fits the most in their organizations. Now, you and I had a real, real heart to heart. One day I called you up and I said, you know, I'm sorry, MCP servers are cute, they sound cool, but, brother, I don't understand, why can't I just, you know, stick with my APIs? We had a real blunt conversation, so we're gonna repeat this conversation, just minus the, uh, explicit language that you used with me, because I've never said anything bad, right? Do you mind

Speaker 2:

helping the audience understand what exactly MCP servers are? Because Microsoft has dropped it, but we know they've been out for a minute. So there's Anthropic's MCP, there's Microsoft's MCP servers, you know. Let's just talk about what this is actually used for, and then, if you can, get into why it's better than just using regular plain old APIs.

Speaker 3:

Yeah, for sure. So MCP essentially is a protocol that Anthropic came out with that allows users to connect these LLMs to tools, functions, et cetera, and basically just increases the abilities of these LLMs. Prior to MCP, all they had was the knowledge they were trained on. So you would go to something like ChatGPT and ask, hey, how do I, I don't know, change my tire? And it would go to this repository of all the data it's trained on, pull the information on how to fix your tire and then spit that back out to you.

Speaker 3:

Now, when it comes to MCP versus API, people say, hey, MCP is just a wrapper for APIs, so why wouldn't I just use an API? The thing about MCP is that it's a standard protocol for literally all LLMs, right? So if an LLM can understand MCP, it can talk to your MCP server that you created. An MCP server has multiple objects, basically, that you can stand up in it. So you can stand up information-getting tools, tools that actually do stuff, like essentially POST requests for an API, and then you can run functions and things like scripts with the MCP server. Now you have an MCP server, and you have multiple LLMs that can all talk that one language.

Speaker 3:

Now you put in whatever tools you want: GET requests to Field Service, GET requests to F&O, GET requests to, I don't know, Salesforce, whatever tool you're trying to use that has an API.

Speaker 3:

If you put that into an MCP server, now every single LLM that can talk MCP can talk to that data source. So let's say you have multiple agents that might run on different platforms and you want to get to the same data, all you have to do is spin up one server. You don't have to define these API calls for every single LLM separately, because they might handle APIs differently. Now that there's one standard protocol that can understand these APIs, all the LLMs you're using can talk to it. And what's also cool is the way you define these APIs, tools, functions, et cetera, inside your MCP server: you give descriptions, you give parameters, et cetera, like you would naturally with a custom connector in Power Platform. But now you can talk to the LLM in natural language, and from your query it can understand, pick and choose the exact values that you have in that plain-language query, put that into the MCP server and recognize what tool you're trying to call.
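
To make the tools-plus-descriptions idea concrete, here is a minimal sketch of an MCP server in Python. It assumes the open-source MCP Python SDK's FastMCP helper; the endpoints, tool names, and fields are invented placeholders, not anything discussed in the episode. The docstrings and typed parameters are what an MCP-aware client reads when deciding which tool matches a natural language request.

```python
# Minimal MCP server sketch using the MCP Python SDK's FastMCP helper.
# The REST endpoints and field names below are hypothetical placeholders.
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("business-data")

@mcp.tool()
def get_work_orders(technician_id: str) -> list:
    """Fetch open work orders for a technician from a field service API."""
    resp = requests.get(
        "https://example.internal/api/workorders",   # hypothetical endpoint
        params={"technician": technician_id, "status": "open"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

@mcp.tool()
def create_invoice(account_id: str, amount: float) -> dict:
    """Create an invoice in the ERP system (a POST-style 'do something' tool)."""
    resp = requests.post(
        "https://example.internal/api/invoices",     # hypothetical endpoint
        json={"account": account_id, "amount": amount},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()  # any MCP-aware LLM client can now discover and call these tools
```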

Speaker 2:

Exactly, exactly, exactly. And that's where you had me, because with APIs, you know, I would have to programmatically develop that functionality if I needed it to recognize what items were in that prompt and know which API to call to get a real response. MCP basically allows the LLM to determine which is the best data source to provide the best type of response, correct?

Speaker 3:

Right, yeah, people call MCP, what, the USB-C of LLMs. Oh no, you know how I feel about that. Like a universal plug, right? So one of the cool things about MCP is that, since LLMs are so amazing at pattern recognition and have natural language processing within them, they can understand, based on the information within the MCP server and what you type, what you're trying to say, and they'll pick the correct tool.

Speaker 2:

Nice, nice.

Speaker 1:

So would you say that's one of the biggest advantages to the MCP, or, you know, are there other areas that you would say, like you know, that give us a little bit of a competitive edge when it comes to AI?

Speaker 3:

I mean, I'd say that's probably the biggest advantage, the fact that it's this standard protocol that you can talk to in natural language. You don't need to define the parameters. Say you're trying to make a flow in some tool: you're asking a question, and in that question you need to define a variable, take what the user said and put it into that variable, then create the object and send it to the API. So that's definitely the advantage of MCP, though I'm not saying that there aren't disadvantages. Obviously there are some disadvantages.

Speaker 3:

Right now I do think that MCP is still in its infancy. There can definitely be security improvements, which, you know, people like Microsoft have tried to mitigate. You can spin up your MCP server on an app service and then wrap that app service in an API. Microsoft does stuff a bit differently when it comes to MCP, where you need to create YAML code, like you would when you're creating a custom connector, put that into the custom connector area within the Power Platform, and then spin it up and say, hey, this also requires an API key there. Sure, you have some security there, but in general there's no real robust security, because when you're calling that MCP server, when you spin up, let's say, an F&O environment, you're putting your client ID, secret, et cetera, within the environment variables anyway.

Speaker 3:

So when an LLM calls that single MCP endpoint, if somebody else has that URL or that custom connector and they connect to it, then they can start talking to your data, which isn't necessarily a great thing, right? But obviously you're probably not giving your URL, et cetera, out. These are just things that you need to consider when it comes to governance, things like that. So, yeah, there are certain disadvantages, sure, but there are ways to get around them. But, like I said, MCP is still in its infancy and it's definitely going to get better, for sure. We've seen, like you said, Will, how LLMs are getting better and better.

Speaker 2:

All right, so I'm going to ask you a few questions, and this is an area that you know is near and dear to my heart, right? So thanks for the explanation.

Speaker 2:

We get it. So, like, if you want to build a field service solution, remember when you and I had this convo? I'm going to let you get the verbs and the words in. If you wanted to build a real field service solution leveraging MCP servers, you can do that, right? You can have it check knowledge articles, previous work orders to get more information, inventory levels, alert technicians. It can do all these actions by leveraging MCP servers, right? But it's just one source, not one source, but one MCP server, correct?

Speaker 3:

Right, exactly. And the cool thing about this is that you can spin up one MCP server, right? And let's say you want to have multiple softwares, multiple APIs, within that MCP server. You can definitely do that. So you're not going to have a custom connector for each piece of software, right? You're not going to have one custom connector for Field Service, one custom connector for Salesforce, if you're using Salesforce for some reason. We can cut that out, just kidding.

Speaker 2:

Right.

Speaker 3:

But you know, yeah, you can put all these different softwares in there, and now you just have one server that all your LLMs can talk to. And also, if you want to use that MCP server to do the tool calling, let's say you're using Salesforce and you're also using F&O, you can pull data from Salesforce into your LLM. The LLM can understand what you're trying to say, and then you can talk to the same MCP server and do a POST request to F&O to transfer data from Salesforce to F&O straight from an LLM, which is nice.

Speaker 2:

It is more powerful than just a regular API. Um, so, yeah, you won that argument. You won that argument. It was cool, it was a fun conversation. What do you mean, I lost an argument?

Speaker 2:

I will publicly acknowledge that he owned me during that conversation. There was a lot of, uh, "No, wait, Will, but why wouldn't you do this instead? And why wouldn't you do that?" And I'm like, God, I hate it when he does this. So it was all good, it was all fun. But hey, I have to ask you, because you mentioned it, we've got to go with the G word here: governance. Big deal, governance for AI. Can you tell us what your thoughts are? I mean, I know there are capabilities you could build into it with Azure AI Foundry and other areas. I'll let you, I'm not going to, people don't tune in to hear me speak, they tune in to hear our guests speak, so I'll let you talk more to that. But governance, securing data, and making sure our data sources aren't inadvertently exposing information that they shouldn't. You want to talk to us about that a little bit?

Speaker 3:

Yeah, I mean, obviously people like Microsoft and other companies have been trying to fix these issues. They've created manual authentication for Copilot, things like that. Sure, it's built on Entra, so technically, if somebody tries accessing your endpoint, things like that, they wouldn't be able to unless they're credentialed. But there's definitely real risk around AI, and, unfortunately, around anything really, right? You see these big companies having these big data breaches and they're not even using LLMs, right? Like recently, I think it said 70 million passwords or account details were released onto the dark web, essentially from people like Google, Facebook, YouTube, et cetera. YouTube is part of Google, but sure. We're always going to have these kinds of risks when it comes to new technologies, right? It's a new, ever-evolving landscape of technology and, like we said on this pod, it's ever-growing.

Speaker 3:

Multiple protocols are coming out, but what I'll say, and what I'll preface this with, is not to jump directly on the bandwagon; first understand how the tech works. Like MCP, like I said, you might want to take me up on that offer and wrap it in an API. I'll say do that. Do that, right? Go to Azure, spin up your MCP server, wrap it in an API, add a key to it. That adds a little bit of, you know, extra security, right? And let's say you're doing a Copilot. Sure, it might be in Teams. Maybe you want to add another authentication layer on it. Maybe you want to connect it to, I don't know, what is that called, Vileo, Zileo, whatever, the messaging app. Send a code to it, authenticate that way, maybe add another layer of security on it. But yeah, I mean, these tools are constantly evolving, but make sure you understand the governance around them.
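
A minimal sketch of the "wrap it in an API and add a key" idea, using Flask as a stand-in for an API gateway sitting in front of a hypothetical MCP endpoint. In practice you would more likely use something like Azure API Management; the URL, header, and key handling here are illustrative only.

```python
# Thin gateway sketch: require an API key before forwarding requests to an
# MCP server endpoint. The backend URL and key handling are placeholders.
import os
import requests
from flask import Flask, Response, abort, request

app = Flask(__name__)
MCP_BACKEND = os.environ.get("MCP_BACKEND_URL", "http://localhost:8000/mcp")  # hypothetical
API_KEY = os.environ["GATEWAY_API_KEY"]  # keep the secret out of source control

@app.route("/mcp", methods=["POST"])
def proxy_mcp() -> Response:
    # Reject callers that don't present the shared key.
    if request.headers.get("x-api-key") != API_KEY:
        abort(401)
    upstream = requests.post(
        MCP_BACKEND,
        data=request.get_data(),
        headers={"Content-Type": request.headers.get("Content-Type", "application/json")},
        timeout=60,
    )
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type"),
    )

if __name__ == "__main__":
    app.run(port=5000)
```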

Speaker 3:

It's only recently that Microsoft started going super, super hard on the security aspect. Because, you know, sure, Microsoft's a great company, a lot of us love it, but there are certain issues where things like Microsoft Copilot have had prompt engineering issues, prompt injection issues, where people have actually gone and pulled data from these companies. And at one point there was a repository of Fortune 500 companies' data that somebody had put together and sent to Microsoft and other people, saying, hey, this is a real issue, why don't you fix it? And it's these people who have allowed Microsoft to understand where these data breaches are happening, and they've plugged these issues.

Speaker 3:

Sure, there are still issues happening and Microsoft is constantly evolving. Like with any other company, you have white hat hackers who are constantly hacking things and then sending that information to these companies to fix. But, just like anything, just understand the governance behind it. Understand that it might not have the best security. Before you actually implement something, first understand how to secure it, just like anything else, and then you'll be on your way.

Speaker 2:

So, like, let's just stick with good old-fashioned RAG for grounding, right? I think Azure AI has an index filter. If I remember correctly, Azure AI Foundry probably has an index filter for security. There are different methods and tools we can use. Let's just get away from MCP for a second, I know I said a bad thing, but do you want to speak to some of the technologies or tools that are available for securing your data? Because, I mean, at the root, right,

Speaker 2:

you take your data, you kind of expose it to help ground your LLM. What are some tools that are available to help secure the grounding? Let's put it that way, right?

Speaker 3:

I mean, there are definitely multiple tools available. Obviously, Microsoft, they have their Entra ID, formerly known as Azure AD. You can allow your data to only be sent to authenticated users, especially in Copilot Studio. When you have, let's say, SharePoint connected to it, that Copilot can only bring back data that you can manually and physically get when you log into SharePoint. So that's obviously one of the good things about it. For sure, it's connected to Entra, so all the enterprise-grade security that Microsoft is constantly adding to it allows you to ground your data and only allow authenticated users to get it.

Speaker 3:

There are also other tools, like APIs. You can use the Azure API gateway to wrap your data endpoint. Let's say you're trying to talk to a blob storage, and to get your LLM to be able to talk to it, you're sending an HTTP request to that blob storage endpoint. Sure, you can use Entra, but then, for another layer of security, you can use the API gateway to wrap it and talk to that data.

Speaker 3:

There are also other tools integrated into things like Azure AI Search; you can add another layer of security on it. But yeah, I mean, there are multiple different things you can do. You can use security certificates. Those are just another layer of security you can add, kind of just like an API. It's funny, I had a friend, a colleague, trying to implement certificates in Power Automate, because the service they were using doesn't use Entra, so they needed to use a certificate and client ID, et cetera, and send that to the endpoint and pull data back. But what they didn't know was that within Power Automate that certificate option doesn't actually work. It's just there.
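
As one concrete example of Entra-based access instead of shared keys, here is a short sketch that reads a grounding document from Azure Blob Storage using DefaultAzureCredential; the storage account, container, and blob names are placeholders, and what you do with the text afterwards depends on your own RAG pipeline.

```python
# Sketch: read a grounding document from Blob Storage using Entra ID
# (DefaultAzureCredential) instead of a connection string or shared key.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()  # picks up managed identity, CLI login, etc.
service = BlobServiceClient(
    account_url="https://examplestorage.blob.core.windows.net",  # hypothetical account
    credential=credential,
)
blob = service.get_blob_client(container="grounding-docs", blob="faq.md")
text = blob.download_blob().readall().decode("utf-8")
print(text[:200])  # hand this off to your RAG / grounding pipeline
```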

Speaker 2:

It's not there. It's not for those connectors, it's not.

Speaker 3:

It's not.

Speaker 2:

You can't, you can't. That's why my eyes... You can't see us on video, folks, so thank God, you wouldn't have seen the expression on my face. Yeah, go ahead. Sorry about that.

Speaker 3:

So he was trying for like three days, and I'm like, oh yeah, I'm sorry, that doesn't actually work, so you've got to use something like actual Azure infrastructure to do those calls. But yeah, I mean, there are multiple different things you can do, for sure, and Microsoft's constantly upgrading Purview to understand who's accessing your data, things like that, so you'll be able to understand who's actually hitting those endpoints. So, yeah, I mean, there are lots of tools out there. Like I said, I would suggest doing your research before jumping fully into the AI space.

Speaker 3:

That's one of the things I like to suggest when I'm doing stuff for clients: definitely first do your AI strategy work. That includes the governance. Understand what sources you want to use. Then, based on those sources, understand how to protect them when you're trying to connect them to LLMs, your Copilot-integrated dashboards, because Copilot is everywhere now. And, yeah, just understand how to secure that data, for sure. I mean, that's something you want to do for everything, right? Anything you're trying to do, anything you're trying to use to connect to other sources to get your data. Always understand your governance first.

Speaker 1:

Yeah.

Speaker 2:

Governance first.

Speaker 1:

What would you say, you know, in terms of what you've seen, are the common pitfalls companies are experiencing when they're trying to implement AI? Let's just say, right, they don't have a strategy, or maybe they even do have one, but what are you seeing as some of the common pitfalls and challenges that companies are facing?

Speaker 3:

I think one of the biggest issues I've seen is clients misunderstanding the complexity of Copilot. Sure, it's a low-code AI platform, but there are certain nuances that they need to first understand. And, you know, I know everybody hates reading documentation, but that's something you need when you're trying to use a new technology. Sure, videos can help, Lisa Crosby and all these people have wonderful videos on Copilot and other AI tools, but what people don't understand is that there are these governance issues, these certain nuances, that they need to grasp before trying to implement these tools. Now, sometimes companies are, let's say, talking to somebody about their AI strategy, or they're talking to a company to get somebody to implement this for them because they've tried it before. I would suggest that these companies, before trying to do something like AI when they don't fully understand it, get an expert. They don't need to do the full implementation for you. Ask them to do one use case, right? Take one use case, have them show you the governance side of everything and basically tell you everything about that one platform you're trying to use, and then, sure, now you can have them do staff augmentation, train your own people to do the rest of the implementation. You don't need to use that company to do everything, but at least get an expert to tell you, because that expert literally does that for their job. And technology consulting is a little bit different than, let's say, management consulting, where technology consultants actually show you how to do things and actually build the software you're trying to use. So I would suggest getting an implementer or another person who actually understands this software, because obviously this is the future and it keeps constantly growing, and then understand the tool first and then, sure, do whatever you want.

Speaker 3:

But I've seen clients creating these tools in their default environments and going, why isn't this working? You guys don't see the video, but Will's eyes rolled three times. But, uh, I mean, there are certain issues where clients try to add PDFs as grounding data for Copilot Studio in SharePoint, but they don't understand that Copilot can't index pictures within SharePoint yet. They're trying to do things that might allow you to do it, but then what I end up having to do is tell them, hey, you can either convert these PDFs to full-text documents, because Copilot just uses the standard SharePoint Microsoft Search graph indexing that's built in, or you can upload those documents directly to Copilot, because Copilot has a better indexer, and if you upload the documents directly to Copilot it can read images within PDFs.

Speaker 2:

It takes forever, right? True, not a shot, just the reality, folks. Just setting it straight, just so you know: if you upload it to Copilot, it's not going to index in 20 seconds. All right, it takes a little bit of time, folks, so just be prepared. Just throwing that out there. Go ahead, Sahib, my bad.

Speaker 3:

I mean, it just shows you how good that indexer is, right?

Speaker 2:

No, it's doing a good job, it's doing a good job.

Speaker 3:

But yeah, I mean, there are just certain things, and the weird thing is, so let me backtrack a bit here. We were at this client and they were having problems with their PDFs, et cetera, and they were saying, oh, it's not working. And they had another implementer come in, I guess they said he was a Copilot expert, and he told them that all they had to do was convert these PDFs into text documents, just plain text documents. And I'm just like, okay, well, if this expert had just gone to the documentation and looked up the knowledge sources that can be used by Copilot: not only can PDFs be used within SharePoint, PDFs can also be directly uploaded.

Speaker 2:

So where did the so-called expert come from? What do the kids say? We say cap, right?

Speaker 3:

Yeah, cap. So when it comes to these pitfalls with AI, I would suggest that, since it's so new, you do your research before you let them tell you how to do things 100%.

Speaker 2:

So I have to ask a question, and this is something that you and I actually aligned on, but I think it's fair for the audience to hear it, right? We align on a lot, but when I first started working with you, I was like, man, you know what, I've got to give him a real cool name: it's Mr AI now. So can you help our audience understand the difference between AI and automation? And I'm throwing this up there for you so that you can lead into agents as it goes.

Speaker 2:

Can you help them understand, obviously, AI versus automation?

Speaker 3:

Yeah, for sure. So automation, obviously, is your tools like Logic Apps and Power Automate and n8n, et cetera, all those tools out there, Zapier. They're basically flows and tools that allow a process, like a repetitive process, to be handled by something else. Right, let's say every day you're going to, I don't know, F&O, pulling data out of the data management framework, then importing that data or having a data pipeline send that data to a Power BI dashboard, and then you're, I don't know, doing a bunch of filters on your data to make that graph all pretty, things like that. Now, with automation, you can automate that process with HTTP requests, et cetera, to pull that data and then do all of that filtering automatically with Logic Apps, Power Automate, things like that.
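
To show what such a flow is actually automating, here is the same repetitive job written as a plain Python script: pull data over HTTP, filter it, and write out a file a dashboard could consume. The endpoint, token, and field names are placeholders; a Logic Apps or Power Automate flow would do the equivalent steps on a schedule without hand-written code.

```python
# The repetitive job described above, as a plain script.
import csv
import requests

resp = requests.get(
    "https://example.internal/api/export/sales",   # hypothetical export endpoint
    headers={"Authorization": "Bearer <token>"},    # placeholder credential
    timeout=60,
)
resp.raise_for_status()
rows = resp.json()

# Keep only the rows the dashboard cares about.
filtered = [r for r in rows if r.get("region") == "EMEA" and r.get("amount", 0) > 0]

with open("dashboard_feed.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["date", "region", "amount"])
    writer.writeheader()
    for r in filtered:
        writer.writerow({k: r.get(k) for k in ["date", "region", "amount"]})
```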

Speaker 3:

Now, when it comes to AI, this is where I like to make a distinction with things like Power Virtual Agents, because essentially, Power Virtual Agents weren't technically AI; they were just automation tools that happened to talk because people had programmed things into them. But the reason why it switched to being called Copilot was because now the underlying engine within Power Virtual Agents could understand natural language. So when people talked to the bot and had certain spelling mistakes, now, through natural language, that bot could understand what you're talking about, because the underlying model behind Power Virtual Agents, now Copilot Studio, could understand what you're saying. So the fact that we included something in that layer that could understand natural language through pattern recognition, because the underlying model was built on natural language, meant it evolved from automation, essentially, to something like AI. Now, me personally, it's my opinion, I don't technically consider LLMs AI. Sure, the technology is technically AI, but I don't consider it AI.

Speaker 2:

Oh, you're starting World War III, man. You're starting...

Speaker 3:

Oh boy.

Speaker 3:

Now, with natural language, we went from automation to AI. So now we have this automation platform that can understand natural language through a text model. Now you have something that also evolved into agents, because now you have a natural language tool that is also connected to automation and can understand natural language queries from a user or from an input. So now you have these automated agents with triggers, let's say, just like a Power Automate flow.

Speaker 3:

If an email comes in, the Power Automate flow would index that email, pull the appropriate values, create an object and do whatever you want with it. But now an agent can get that same email and, through natural language processing, without you having to build a complicated Power Automate flow, it can automatically understand what that email is saying, pull those values out automatically and then, through natural language, understand what tools to call based on whatever description you have on that tool, and then do whatever you want. So we went from automation that you had to program, creating these variables, et cetera, to natural language processing that you could talk to and call tools with, to now something like an agent that can understand the context of the data it was given and the appropriate tools to call to process that data without much human input. Now, I don't fully trust an agent to do everything autonomously if it's something complex.

Speaker 2:

That's why we have agents. Got to have QA, got to have...

Speaker 3:

QA. That's how you have it. Yeah, exactly, and that's why you also have a human in the loop for AI agents. So you have an agent that does all this stuff, and then, before it actually sends that data and puts it into your F&O system, or wherever you're trying to put it, you have a human review it. But obviously that takes a lot less time than going to the actual data source, cleaning the data yourself and then doing all the automation, et cetera, yourself. Now you just have an agent doing it, and all you have to do is review what it's going to do and hit accept. And now you have something that might've taken days or, I don't know, weeks cut down to hours or days.
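
A bare-bones sketch of that human-in-the-loop pattern: an agent-style function extracts structured fields from an email, shows the proposed record to a person, and only writes it to the downstream system after approval. The extract_fields_with_llm stub and the target endpoint are hypothetical placeholders for whatever model and system you actually use.

```python
# Skeleton of the email-triggered, human-in-the-loop agent described above.
import json
import requests

def extract_fields_with_llm(email_body: str) -> dict:
    """Placeholder: ask an LLM to pull structured fields out of free text.
    Stubbed with a fixed result so the sketch runs end to end."""
    return {"customer": "Contoso", "item": "pump seal", "quantity": 2}

def handle_incoming_email(email_body: str) -> None:
    proposed = extract_fields_with_llm(email_body)

    # Human in the loop: show the proposed record and wait for approval.
    print("Agent proposes to create this record:")
    print(json.dumps(proposed, indent=2))
    if input("Accept and send to the ERP? [y/N] ").strip().lower() != "y":
        print("Rejected; nothing was written.")
        return

    resp = requests.post(
        "https://example.internal/api/orders",   # hypothetical endpoint
        json=proposed,
        timeout=30,
    )
    resp.raise_for_status()
    print("Created:", resp.json())

if __name__ == "__main__":
    handle_incoming_email("Hi, please order 2 pump seals for Contoso. Thanks!")
```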

Speaker 2:

I love it, I love it. You know what, we've got to do a part two to this, man. We've got to have you jump into machine learning versus AI, because that's my area, I love talking about that. But let's give our audience a little bit more about you. Where can you be reached, where's your blog, and where's all that good fun stuff at? And, if I'm not mistaken, don't you have some speaking sessions coming up? Hint, hint, wink, wink.

Speaker 3:

I do, I do. So there are multiple ways you can reach me. Obviously LinkedIn, because that's where everybody is, but I also have a blog called groundtocloud.ai, because you know I love AI. That was not a cheap domain to get.

Speaker 1:

I can only imagine how much that would be. You're committed.

Speaker 2:

You're committed to your brand.

Speaker 3:

We love it.

Speaker 1:

That's awesome man.

Speaker 3:

Before I continue: the .ai domain is actually owned by a country called Anguilla, so they've been making millions and millions and millions of dollars a year selling these domains to people. So I was like, hey, good for them, good for their economy. Sure, I love pirates. But then also, I do have a podcast where I break down AI headlines and AI topics, just for the general audience. Sure, tech people can view it too, but it's called Tech for Thought. You can reach us on Buzzsprout, et cetera, YouTube, Spotify, all the places where you can find other podcasts like this one, this great podcast here. And then I do have some speaking sessions at the Community Summit in Orlando at the Gaylord Resort. I will be talking about AI agents, I'll be talking about governance for Copilot, I'll be talking about how to do multi-agent frameworks using flows, and then there's also a private preview out there for actual multi-agents built into Copilot Studio itself. So I'll be talking about that there as well.

Speaker 1:

That's awesome yeah.

Speaker 3:

And we'll definitely, definitely...

Speaker 1:

We'll make some plugs, just so everyone can reach out. We'll make sure that in the description we send out all the links to all your sites. Everyone will benefit from sitting in and attending those sessions as well, but we definitely have to have you back on for, I think, a part two of this discussion.

Speaker 3:

For sure. I mean, I loved being here and we had a great conversation, so I'd love to be back.

Speaker 2:

And we're going to do it. We're going to get a whole group on there. We're going to make a panel. It's going to be the group, the squad. But, man, we just really appreciate it, and honestly, just from a community perspective, love what you're doing in the community, and we just keep encouraging you to keep bringing others along, man. So keep up the good work. Really, really love what you're doing out there, brother.

Speaker 1:

Thank you. Thank you so much for everything you're doing and for the conversation today. Hopefully our listeners have enjoyed learning a lot more about AI, a little bit more in depth, versus, you know, just having the surface conversation about what it is and what it does. I think this was really informative for, you know, not only just myself, I think I've learned quite a bit just from listening to you, but hopefully our listeners have as well, and we'll definitely have to continue in a part two.

Speaker 3:

Yeah, thank you for having me guys.

Speaker 1:

Yeah, absolutely. Well, take care, Sahib, appreciate it, and everyone that's been listening, have a wonderful rest of your day and we will see you in the next episode. Thanks everyone.
