EP 13: Will AI Replace All Developers? with Dan Metcalf
About This Episode
In this episode, Dan Metcalf and Cloud Currents host Matt Pacheco dive into the rapidly evolving landscape of artificial intelligence and its profound impact on various industries. Dan shares his personal experiences with AI tools like MetaGPT and AutoGen, highlighting their capabilities in breaking down complex tasks into manageable steps and their recent advancements in providing comprehensive solutions. He discusses the potential of AI to revolutionize software development, noting how systems like Devin can write code with built-in logging and exception handling, raising questions about the future need for human developers.
The conversation further explores the use of AI in creating children’s books, generating new content, and the challenges of AI needing human guidance to avoid circular patterns. Dan describes a project where he leveraged AI to analyze financial data from cryptocurrency markets, producing detailed reports that would typically require a team of analysts. He emphasizes the advanced reasoning capabilities of these AI models, which can now interpret large data sets and identify patterns, a significant leap from their earlier limitations. As the episode concludes, Dan offers advice to listeners in the tech field, stressing the importance of staying ahead of AI advancements and preparing for a future where AI could potentially outpace human capabilities in various tasks, including software development and content creation.
Know the Guests
Dan Metcalf
Chief Architect
Dan Metcalf is a seasoned technology innovator with over two decades of experience in crafting advanced solutions in the realms of software, hardware, blockchain, and Web3 technologies. His expertise shines in GitOps automation, where he advocates for a top-down approach to achieve true, seamless operations, minimizing the traditional role of DevOps through full automation of cloud infrastructure and deployments. Notably, Dan developed and implemented the United States' largest mobile mesh network, a pioneering outdoor/indoor IP over RF mesh network designed for marine and mobile environments, which first launched in 2005. Additionally, he has made significant contributions to the field of blockchain technology, having designed and developed a sophisticated blockchain transaction system based on a distributed peer-to-peer network.
Know Your Host
Matt Pacheco
Head of Content Marketing Team at TierPoint
Matt heads the content marketing team at TierPoint, where his keen eye for detail and deep understanding of industry dynamics are instrumental in crafting and executing a robust content strategy. He excels in guiding IT leaders through the complexities of the evolving cloud technology landscape, often distilling intricate topics into accessible insights. Passionate about exploring the convergence of AI and cloud technologies, Matt engages with experts to discuss their impact on cost efficiency, business sustainability, and innovative tech adoption. As a podcast host, he offers invaluable perspectives on preparing leaders to advocate for cloud and AI solutions to their boards, ensuring they stay ahead in a rapidly changing digital world.
Transcript Table of Contents
00:00 - Introduction to Dan Metcalf and his career
15:58 - The Impact of AI and Large Language Models on Cloud Strategy
21:40 - The Future of DevOps and the Rise of AI Tools like "Devin"
24:41 - The Economic Implications of AI in the Software Industry
29:12 - AI's Impact on Cryptocurrency Analysis
34:41 - Closing: The Acceleration of AI Development and Its Implications
Transcript
00:00 - Introduction to Dan Metcalf and his career
Matt Pacheco
Hello everyone. Welcome to the Cloud Currents podcast, where we explore innovative technologies and strategic approaches that are shaping the future of cloud computing. I'm your host, Matt Pacheco, the head of content at TierPoint, where I help businesses understand the impact of cloud and AI on their operations. This episode is going to be a deep dive into optimizing cloud strategies through principles like GitOps, infrastructure as code, and even leveraging AI techniques. We'll discuss how rethinking infrastructure and automation can unlock new levels of cloud agility, cost efficiency, and scalability. And today we're joined by Dan Metcalf, a renowned technology architect who has been at the forefront of emerging trends for over 20 years. Dan currently serves as chief architect at ATC Corp, a cloud consulting firm he founded. He has extensive experience in AWS, DevOps, Kubernetes, blockchain, and more.
And he's also working on applying large language models in very interesting ways. But we'll certainly get into that more later on. Thanks for being here with us today, Dan.
Dan Metcalf
Yeah, glad to be here. Welcome. Thanks.
Matt Pacheco
So let's jump in, but I'd like to start with understanding a little about you. So tell me about how you got into cloud and everything you're doing today. Tell us where you started and how you got to where you are today.
Dan Metcalf
Yeah, well, I actually got started in the nineties doing large-scale deployments in data centers. But for me it was about 2010 when I did my first major deployment on Amazon. You know, by 2014 I had actually built my own version of Terraform for a project I was working on, so I could automate deployments on AWS and allow people on my team to do the deployments the way that they needed to be done, with the right parameters, but also not allow them to go outside of the guardrail system, so to speak, that was built.
I mean, that was just some of it. I also created and launched a wireless ISP in the nineties. There was no Internet in my area; I ran a dial-up BBS for years. And as I heard of the Internet and cable modems, I wanted to get online. Eventually I put an antenna on a tower 400 feet in the air, and within three years it had already covered something like 200 square miles. But unfortunately, with competition from cable and DSL, you couldn't really compete. Anyway, I launched that wireless ISP and ended up selling the technology to a ferry company, and they still use it today. They cover over, I think, 400 square miles from Hyannis to Nantucket and the Vineyard.
And they use this system to get Internet access on the boats along the way.
Matt Pacheco
That's awesome.
Dan Metcalf
Yeah, it was definitely pretty cool.
Matt Pacheco
So you've been creating a lot of stuff over these years, and I guess a lot of this led to you founding ATC Corp. Can you tell us what led to that?
Dan Metcalf
Well, that project was really the start of it, you know. I had designed an engine and a solution that combined, and I don't want to get into the details too much because it's proprietary, but it combined a few different networking technologies with custom software to create an off-the-shelf, supportable system that could connect Internet access up to 15 miles out in the open ocean. Actually 20, which even to this day is still a challenge. Right. It's actually the only solution out there that does this, and it doesn't require a cloud controller or anything like that. All of the brains run on board to analyze what routing or path it should take at any given time.
Matt Pacheco
So what made you pivot from networking to more of a focus on cloud DevOps?
Dan Metcalf
The thing is that, you know, at the end of the day, you have to be able to pivot and realize, hey, this is the future, right? I used to do this in data centers. Physically, none of that has changed. There are still data centers that require servers and switches and everything else, but now, through services like AWS, they've abstracted all of that away through an API. So I can just say I need ten servers, I need this load balancer, I need whatever. I still get to apply my networking background in the cloud, because you have to connect, whether it's regions or VPCs, or you might have failovers. You could still have on-prem, so you might have direct connections coming into your AWS environment.
So I'm able to benefit from all of that knowledge and then be like, oh, I can do the same thing with this configuration, and then have deployments that meet those requirements.
Matt Pacheco
And also from reading about you and hearing about you in the past, you also worked on bitcoin and blockchain. Can you tell us a little bit about that?
Dan Metcalf
Yeah, I didn't directly work on the Bitcoin protocol itself. I worked on a project that used Bitcoin's BIP 65 protocol, which enabled atomic swaps. The idea would be that if you had Litecoin and I had bitcoin, we could swap them without any third party involved. Definitely an interesting project. The protocol does work. There are obviously some challenges with on-chain confirmation in that you have to wait. Right. With bitcoin it's a ten-minute block time. So there are some of those challenges that impact the user experience, in that it's hard to compete when you have something like Uniswap, which is near-instant settlement, and then Solana, which has even faster block times, so the swaps within their chain are even faster and the fees are extremely low.
So it was hard to bring that product to market, because it couldn't compete with the AMM technology that was being deployed; I think around 2017 they started to come out. It's funny, I was just reading how some company received 45 million from Coinbase to create an atomic swap platform. They've been at it for three years, have spent 15 million, and still don't have a product. And I'm like, wow. It took us nine months to have our first prototype, and we did it on a garage budget.
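For readers unfamiliar with the mechanics, the hashed-timelock idea behind BIP 65 atomic swaps can be sketched conceptually in a few lines of Python. This is an illustration of the general protocol, not Dan's implementation:

```python
import hashlib
import os

# Conceptual hashed-timelock (HTLC) atomic swap flow, illustration only.
# 1. Alice generates a secret and shares only its hash with Bob.
secret = os.urandom(32)
secret_hash = hashlib.sha256(secret).hexdigest()

# 2. Alice locks BTC in a script spendable by Bob with the preimage,
#    or refundable to Alice after a timeout (OP_CHECKLOCKTIMEVERIFY, per BIP 65).
# 3. Bob locks LTC the same way, spendable by Alice, with a shorter timeout.
# 4. Alice claims the LTC by revealing `secret` on-chain; Bob then uses the
#    revealed secret to claim the BTC. Neither side can cheat the other.

def claim(revealed_preimage: bytes, expected_hash: str) -> bool:
    """The check both chains' scripts enforce: does the preimage match?"""
    return hashlib.sha256(revealed_preimage).hexdigest() == expected_hash

assert claim(secret, secret_hash)
```

The timeouts are what make the swap atomic in practice: if either side walks away, the other simply waits out the lock and refunds, which is also why the ten-minute block time Dan mentions hurts the user experience.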
Matt Pacheco
It's impressive. So I'm going to pivot a little. So talking about the cloud a little and cloud strategy, what are some of the biggest patterns or mistakes you see companies making when they're trying to establish or manage their cloud strategy?
Dan Metcalf
I think the biggest thing is that they need to take a step back and look at what really needs to be thought about upfront. In any cloud environment, especially, say, within EKS, logging is a critical component. You need to know what's happening, especially if you have applications that are growing, expanding, or agile; you're going to need to have that ability. So it's important to come up with a standardized logging policy and framework from the beginning, before you even start deploying your applications. That aligns with just having the idea of a pattern for everything. If everything you do is a one-off in your deployment, it's not scalable unless you're just going to keep hiring people.
But you can actually use tools like Helm charts, which allow you to create an app-of-apps pattern, so that you just have this one values file that you use to seed the deployment of that particular application. Then for a different application, you just have the same file format with different information in it. And that allows you to scale and grow rapidly. If you're not doing things that way, you're basically building tech debt. And that goes back to the notion of DevOps versus GitOps, right? If your methodology from day one isn't really around GitOps, you know, embracing logging, and then of course security, you're just going to be band-aiding and patching and plugging the holes as the dam breaks, because that is what's going to happen, right?
These environments don't scale effectively if you don't consider these things. Yes, you can make them scale, but you're going to be throwing tons of money and people at the problem, which could be avoided. I mentioned the secrets thing. When you run applications, you want to keep your authentication and your credentials outside of your source code. You want to separate your configuration from your source code. You just want to build the framework, or what I like to call the foundation. Just like with your house, you always build the foundation first, and then you put the walls up, and then you do electrical, and then you cover that in, and then you paint, right? So when you're building your environment and your infrastructure applications, you really need to start at the foundation layer first and build up on top of that.
And when you need new features, you don't just band-aid them in. You go back and say, okay, do we need to extend the foundation? Right. You have to ask those types of questions so that you're building all this stuff in a modular, scalable way.
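The configuration-from-code separation Dan describes often comes down to something as simple as reading injected values at runtime. A minimal sketch, with hypothetical variable names, might look like this:

```python
import os

# Keep credentials and environment-specific config out of source code.
# These values are injected at deploy time (e.g., from a Kubernetes Secret
# or a cloud secrets manager), never committed to the repo.
DB_HOST = os.environ["DB_HOST"]            # hypothetical variable names
DB_PASSWORD = os.environ["DB_PASSWORD"]
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")  # sane default, overridable

def connect():
    # The same artifact runs in dev, staging, and prod; only the injected
    # environment differs, so every deployment follows the same pattern.
    print(f"connecting to {DB_HOST} with log level {LOG_LEVEL}")
```

The point of the foundation metaphor is exactly this: because the code never changes between environments, a new deployment is a new set of values, not a new one-off.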
Matt Pacheco
So we're talking GitOps and DevOps. What role do infrastructure as code and tools like Terraform play in optimizing the cloud? How does that help the cloud?
Dan Metcalf
Yeah, to me, I wouldn't even want to live in a world of cloud deployments today without that. That's why I wrote my own tool before Terraform. The thing is, you can go into the AWS console and start clicking buttons, and stuff will happen. Right. Now, can that run your business? Potentially. What happens, though, if something goes wrong and you need to recreate it? There's no state. There's no, hey, I want to do that over in us-west so we can scale, and then some other guy's got to hit the same exact buttons the same way you did, right? Ultimately that's a recipe for disaster. So Terraform allows you to take your configuration and define it as code.
It's even more than that, because you can actually create modules, which, if you're into software development, is similar to that. Now, Terraform is not a programming language. It wishes it was, unfortunately, but it gives you enough ability to do what you need to do. And with the Python CDK that's out for AWS, Terraform could become outdated. But it allows you to build complex environments and have the outputs of different components of the Terraform code feed into the other parts. So, for example, if I want to build an EKS cluster on Amazon, I need a VPC. I need, hopefully, a NAT gateway, because they're all private machines, and all these other components need to go into place, the load balancers and things like that. Now, actually, the load balancer could be created by your cluster pod or your autoscaler pod.
Terraform, though, is going to create the EKS cluster, the VPC, and things like that. It's going to take the values from the VPC, like the VPC ID and the security group information, and feed them into the EKS cluster creation step. So it allows you to have this automated deployment. And if you built it right, you only seed it with a couple of different values, and you can duplicate it in any region in AWS, maybe outside of China and the government regions. That's kind of the premise. Now, what's interesting, though, is I wouldn't recommend Terraform to manage an EC2 instance, right? I would say, hey, spin it out, right, maintain that part, but then for the state on the box, you might use a different tool or something like that. I personally wouldn't even use EC2 directly.
Right, because you want to use EKS. Because now you've got this framework where, hey, if I'm pushing into that, then it auto-manages my EC2 for me. So that's the goal, right? If you can use systems that automate and manage all of the different pieces of your operations, and it's done by git push, then that's really the true definition of GitOps.
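To make the output-feeding wiring Dan describes concrete, here is a minimal sketch using the Python CDK for AWS that he mentions (assuming CDK v2; the construct names and parameters are illustrative, not his code). The VPC object carries its ID, subnets, and security groups into the cluster definition automatically, the same dependency flow he attributes to Terraform:

```python
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_eks as eks
from constructs import Construct

class ClusterStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # Private subnets behind a NAT gateway, as Dan describes.
        vpc = ec2.Vpc(self, "Vpc", max_azs=3, nat_gateways=1)
        # The VPC object is passed straight into the cluster definition, so
        # its ID, subnets, and security groups flow in without copy-pasting.
        eks.Cluster(
            self, "Eks",
            version=eks.KubernetesVersion.V1_27,
            vpc=vpc,
        )

app = App()
# Re-running the same code in another region is just a different env value.
ClusterStack(app, "eks-us-east-1")
app.synth()
```

Seeding the whole stack with a couple of values, as Dan puts it, is what makes the same definition reusable in any region.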
Matt Pacheco
So you mentioned GitOps versus DevOps earlier. What do you see as the reason more businesses aren't adopting GitOps or are slower to adopt GitOps? Is there a roadblock to that? Because it sounds like there are more efficiencies there.
Dan Metcalf
I think it's just a matter of really understanding what's involved. As somebody whose entire focus really has been within that space, it's easy to say this is how it needs to be done. Right? So it's just a matter of education and learning and seeing the benefits. But sometimes, too, people have to learn by getting burnt, you know; they did it wrong three or four times, and they're like, oh, maybe we should try something different. And as a consultant coming into a company, you can generally see that, right? So it's like, oh, we can see what isn't working here. And you see it over and over again, the same pattern. I don't know how many companies I've consulted at where they're making these mistakes, right?
They're not GitOps-based. They say they're DevOps, but really all they did was hire a couple of ops guys to do all of the infrastructure work, right? It's not a platform. There's no self-service, and I'm like, that doesn't make sense, that doesn't work. Really, the best solution is where developers can self-service. So if they need to deploy a new application, they can submit a YAML or whatever, it goes through a pull request review, and boom, it's done, right? That's the goal you should be getting to. But most companies don't realize that in order to do that, you actually have to build a platform or a product out of the infrastructure, right? That actually has to become a thing. And because infrastructure and ops are generally not a money-producing segment of a business, it's probably not the priority.
And I've seen it too. There's like, oh, operations costs keep going up, and I'm like, yeah, if you just took 20% of that budget, wrote the software, and created the platform, you could get rid of all of that, right? So you need to have that mentality of wanting to do it. And it's also a big change, right? I mean, every day, especially with AI, I think we all need to see the changes that are coming. But just in the last seven years, you know, like I talked about agile, I used to have machines that I would not touch because they were secure, they were locked down; they didn't need to be anything, right.
And then you have other machines that, okay, well, they're exposed to the Internet, they've got to be patched constantly, whatever. And now you have completely agile software where, you know, stuff I did two to three months ago that I'm working on now, I'm already updating, or things are deprecated or different. So we're in a flux of constant change, and that is actually going to accelerate faster than we've ever seen. I thought it was already too fast, but the reality is, with AI and what's happening now, especially, I don't know if you've seen it, there's Devin, the AI software coder video that was just released last week. So you have a tool that you could fire up, and it literally does everything for you; it just asks you a couple of questions.
So you could be sitting there, you know, reading your PDFs or whatever, running your company, and this thing's just coding away. And if it has a question, it pops up. If it has a bug, it goes out and fixes it. It will even do a git push if you want and manage your repo, so your deployments are done. And you can use open source solutions that replicate what Devin does. Actually, that tool I was showing you earlier, the language model with the cryptocurrency data, I used a Devin-style tool that I'm creating to build that application for me. But we are in a very, I don't know, dynamic time with how everything is going. Nvidia is doing AI training at a thousand-x accelerated speeds.
So, I mean, if you do the math on that, in three days they've trained AI for longer than the age of my kids. So all of that is going to impact where we're heading.
15:58 - The Impact of AI and Large Language Models on Cloud Strategy
Matt Pacheco
Yeah. And I saw, what was it, yesterday, they announced the B200 chip, too. So they're advancing very quickly with the technology behind it. And like you said with Devin, I saw it last week making its rounds on social, and a lot of people are freaking out or saying no way. But I see the value in these tools, and it's more than just writing copy for social posts or building websites. There's so much behind large language models that people are still struggling to understand. They just see the simple uses of it. But like you mentioned, there are a lot of more complex uses. So can you tell me, because AI is where it's at right now, everyone's talking about it, how large language models and AI can be leveraged in your world, from a GitOps and cloud strategy perspective?
Dan Metcalf
Yeah, I mean, I actually don't have any implementations of AI doing cloud ops or GitOps stuff. Right. It can definitely help write the Terraform code to deploy things, and it could do code review on your pull requests. Those are the areas where I see value. I actually do have an implementation of a tool that does that. What's interesting, though, is that as you see something like Devin, you're like, well, geez, why do I even need a coder to write the pull request when he can write it? But you could start him off as just the guy that's going to review it, and then maybe fix it, and then he works his way in and takes over the world.
So, in code review, looking at the pipeline outputs and logging output, looking for deviations in your logs, or detecting anomalies, those are, I think, some of the value-adds that can be easily done today, right? You can just pipe your log files to an AI agent. RAG applications, I think, are really where there's a lot of potential, as I was talking about earlier. You have the ability to upload PDFs, whether it's medical plans or financial statements, into this chatbot or whatever, and it can analyze the PDF and store it in a vector database like ChromaDB for later retrieval. And if you think about it, now you're kind of building this memory system for the LLM, right? And that's kind of where we're heading.
So it'll be interesting to see how that evolves. I'm mostly a consumer of these tools, but at the cutting edge, I'm just seeing where it's going, and it's like, okay, well, we have all the pieces in place today where you could actually have this short-term memory bank in front of the LLM, and it uses the retriever code to find the relevant memory and then applies that to the LLM's bigger reasoning brain and logic.
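A minimal sketch of that retrieval memory, using the ChromaDB library Dan names (the document chunks and query are invented examples):

```python
import chromadb

# Store document chunks, then retrieve by similarity at question time.
client = chromadb.Client()  # in-memory; PersistentClient(path=...) to keep it
collection = client.create_collection("documents")

# In a real pipeline these chunks come from parsed PDFs.
collection.add(
    documents=[
        "The plan covers outpatient visits with a $20 copay.",
        "Emergency room visits carry a $250 deductible.",
    ],
    ids=["plan-0", "plan-1"],
)

# Retrieve the most relevant chunks for a question, then hand them to the
# LLM as context: the "short-term memory bank" Dan describes.
results = collection.query(query_texts=["What does an ER visit cost?"], n_results=1)
print(results["documents"][0])
```

The retriever does the narrowing; the LLM only ever sees the handful of chunks that matter, which is what makes the pattern scale to large document sets.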
Matt Pacheco
So you're talking about all of this information going into these LLMs, your PDFs and code. What are your thoughts on proprietary data and the security of that data? What are your thoughts on putting that type of information into these large language models and the impact that will have?
Dan Metcalf
That's a huge concern. I mean, it's one thing to have a medical plan document in there; I don't think there's anything in there that I'd be too concerned about. But when you're talking about, say, custom code or any algorithms, any IP, yeah, of course, you would definitely need to be wary of that. That's one of the reasons why I run a local LLM here for some of my coding. Unfortunately, the open source LLMs don't support functions and some of the other features that GPT-4 does, so some of my projects are forced to use that. At the end of the day, for the projects that I'm running through it, I'm not sending any data that I would be concerned about.
Now, there is a concept coming out where you can send encrypted inputs and get encrypted outputs, and somehow the LLM is able to reason with that. It will actually be able to reason over your encrypted data and give you some sort of response. I haven't set it up yet, but I went through the PDF and the white paper on it. Hold on, let me just see if I can quickly pull it up. I know Hugging Face has an example of how to do it. But anyway, that would be where it would go, right? It's like, okay, well, now we need to add encryption. It's kind of like with the Internet. Originally, people used Telnet to access servers. Telnet is completely unencrypted: plain-text passwords, everything. And for ten years that's what everybody used.
And then finally somebody woke up one day and was like, oh, that's not secure. So I do think, though, that it is a concern. I don't know if I trust OpenAI with my data or anything like that. When you're looking at log files and some of these other things, those use cases are low risk in that regard. Right? Yeah.
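One common way to keep coding tasks local, as Dan describes doing, is to point an OpenAI-compatible client at a locally hosted model server. A sketch, assuming an Ollama-style endpoint (the URL and model name are placeholders for whatever you run):

```python
from openai import OpenAI

# Point the client at a locally hosted model so proprietary code and data
# never leave the machine. Base URL and model name are assumptions for an
# Ollama-style local server; substitute your own setup.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

resp = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Review this function for bugs: ..."}],
)
print(resp.choices[0].message.content)
```

The trade-off Dan notes still applies: local open source models may lack function calling and other hosted-API features, so sensitive work stays local while low-risk work can go to the hosted model.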
Matt Pacheco
We spoke to a previous guest on this podcast who was talking about people putting their financial reports into these systems before disclosing them, to spell-check and make sure they were proper, without thinking of the implications.
Dan Metcalf
I saw that Amazon is now letting you use Claude 3 Opus, Anthropic's new model. I'm not sure if you've seen it; it's actually pretty good. You can spin up your own iteration of this very large model on SageMaker in the cloud. I'm sure it's expensive due to the memory requirements, but that's an option. And Opus is pretty good; it's kind of my go-to model right now for most things. So you could do that. You could just run a very large language model in the cloud if you had the budget.
21:40 - The Future of DevOps and the Rise of AI Tools like "Devin"
Matt Pacheco
And you got into this a little earlier with Devin and using it for coding. Do you foresee these kinds of large language models replacing DevOps engineers and software engineers, or augmenting them?
Dan Metcalf
It's over. Once I saw Devin... The thing is, I've been using these different tools, like MetaGPT and GPT Engineer, and when they first came out, they weren't really that good. And that was only 9 to 12 months ago, maybe a little longer. But literally, in the last three months, I've seen the scale. It was like, slowly, and then all at once, they're exploding. Devin, though, is really what a couple of smart people can do with available open source technology, right? And you're going to see these other open source projects follow right behind it. So, say within six months, you're going to have your own Devin, and you can have 100 Devins, you can have a thousand. There's no limit to how many Devins you can have. It all really depends on how much money you have.
So, with that being said, Devin can write software. Devin can write Terraform. Right. I don't know if Devin can debug complex operational environment stuff, but maybe we'll see. But these tools excel at writing software. They excel at fixing software. So if you're in that field and you're not learning to master these tools today, then you should plan for retirement, or you should plan to be mastered by the tool. Because even as somebody who's involved day to day and understands the exponential growth that we're seeing, it's going to happen way sooner than anybody thinks. It's happening sooner than I thought. Right. I did not think that I would be able to hire Devin and have him do all of my coding for me.
And it's just like with my own experience, because what I found in the past is that you really needed to break down the software that you wanted to write into small steps, and you would only ask it to do one thing. Tools like MetaGPT and AutoGen did that; they would take your task and break it down. But the tools were struggling to come back with a really comprehensive, unified solution. Like I said, though, in the last three months I've seen an evolution and a change in the output. I don't know if the models have just gotten better, or the tooling or the prompts are better, or what, but to me it's definitely, like, hey, this is it. I mean, if everybody has Devin, why do you need a software developer?
You know? Yeah, yeah. Devin's gonna write code that has logging built in. Devin's gonna write code that has exception handling built in, right from the gate. Instead of you running your code and having a bug and not having a log file for it, he's already taken care of it for you, and he's gonna do it every single time, you know what I mean? Whereas your current developer is not constantly doing that. Plus, this guy can work twenty-four seven, and I can just hire him, right? I mean, if you figure out the compute cost, you can figure out what the yearly cost is to run Devin at the same rate as a human. I actually don't know what that number is, but I'd love to find out, because at that point, now you know, right?
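The decompose-then-execute behavior Dan attributes to tools like MetaGPT and AutoGen can be reduced to a plan-execute-check loop. This skeleton uses stubbed helper functions in place of real LLM calls; production agent frameworks layer review, tooling, and retries on top of the same shape:

```python
def plan(goal: str) -> list[str]:
    """Ask an LLM to split the goal into ordered subtasks (stubbed here)."""
    return [f"step {i} of: {goal}" for i in (1, 2, 3)]

def execute(task: str) -> str:
    """Ask an LLM to produce code or output for one subtask (stubbed here)."""
    return f"result for: {task}"

def check(result: str) -> bool:
    """Run tests or a critic pass; a real agent loops back on failure."""
    return True

def run_agent(goal: str) -> list[str]:
    results = []
    for task in plan(goal):
        result = execute(task)
        while not check(result):  # self-correct until the check passes
            result = execute(task)
        results.append(result)
    return results

print(run_agent("build a Discord bot that reports BTC price"))
```

The "comprehensive, unified solution" problem Dan describes lives in the last step: early tools could plan and execute, but struggled to stitch the per-task results back into one coherent codebase.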
24:41 - The Economic Implications of AI in the Software Industry
If it's like, hey, I can run Devin for 100k, well, all right, maybe you can get a better developer for less than that. Maybe you can't. I don't know. But if you're running Devin for 20k and he's as good as a 100k developer, I mean, it's pretty hard. I don't know anybody looking at those numbers and the output and the consistency. Like, hey, this guy's going to show up every day, and whenever I hit that button, he's never going to call in sick, right? And he doesn't need medical benefits. I don't know why everybody's not panicking if you're in this field, right? I mean, look, the writing's on the wall. It's just a matter of when it's going to be here.
Matt Pacheco
Yeah. And from the marketing world too, with the large language models and all the new video stuff that OpenAI is doing with Sora, I feel like creatives and marketers are also having that similar existential crisis after seeing Sora presented, what, a month ago in February? The videos that came out, it's like B-roll. It's perfect B-roll.
Dan Metcalf
I used my setup here with GPT's image generator, obviously not Sora, to write a children's book, just as a proof of concept. So it wrote ten chapters. Each chapter had like three paragraphs about an individual animal. And then it sent the request to generate the image. So it had the book with the images and everything. I was like, see that? It just did that in under five minutes. It took me ten minutes to write the code. And it could do this 100 times a day and create books all day long. I'm not saying that they were the best books. They weren't bad, though. If I read them, I wouldn't be like, oh, some stupid AI wrote this. I would be like, this isn't bad.
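A rough sketch of the generation loop Dan describes, pairing each chapter with an illustration. The model names are current OpenAI offerings used for illustration, not necessarily what he ran:

```python
from openai import OpenAI

client = OpenAI()
animals = ["owl", "fox", "turtle"]  # ten in Dan's version; three here

book = []
for animal in animals:
    # Generate the chapter text.
    chapter = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{
            "role": "user",
            "content": f"Write a three-paragraph children's book chapter about a {animal}.",
        }],
    ).choices[0].message.content

    # Generate a matching illustration for the chapter.
    image = client.images.generate(
        model="dall-e-3",
        prompt=f"A friendly {animal}, children's book illustration",
        n=1,
    )
    book.append({"text": chapter, "image_url": image.data[0].url})
```

As Dan notes, the loop itself is trivial; it's the ten minutes of glue code around two API calls, which is why it can run all day.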
The thing, though, and this is really going to be the test, I think, of where we're at, is whether or not these AIs are going to genuinely generate new content, right? Because right now it's hard to say. I mean, the images are pretty wild, but you have to feed it stuff to do that. Even when I'm writing the code and doing the work that I'm doing, I do hit points where I'm like, without me here understanding this thing, it would just be stuck in a circle. Now that they're more engaged, going out on the Internet and figuring out how to fix bugs and getting the API documentation, maybe that's different, but we're still right at that cusp.
Whereas if there's any sign of that intelligence, then creators or writers are definitely going to be impacted, because writing software, as you know, is not like writing a book or writing a story. Yes, there's a lot of creative flow and everything else involved, but it's just different, because in your mind, at least for me, I have an objective, so it's easy for me to visualize the objective and then write the code to meet it. I couldn't even imagine how that would work out in writing a book. So I still think that for content writers, there's still going to be a little time, you know; I haven't seen a Devin version of a producer, you know what I mean?
But when you do, that's when you know, all right, these are now leveling up, and it's the next wave. But I have to go back to this video I saw. I think it was the guy from the company that released Claude 3 Opus, talking about token prediction, right, and how it's not just predicting the next word in a list; the model actually has to reason over a massive amount of information and isolate down to generate that word. So it's so much more complex than the simple explanations that you read in the media or whatever. These systems are actually very advanced.
And like I was showing at the beginning, some of the analysis that I'm doing involves feeding in large amounts of financial data from crypto markets, looking at the reasoning, and then combining in search. So, for example, there was the FOMC meeting. I had this system go out, search the news for that, analyze the bitcoin price action, and generate a pretty interesting report about it. And it's able to do that in a few minutes, something you'd need a team of people to generate, and it can do it for over 200 cryptocurrencies in real time. That's something that I never even would have thought of a year ago. It actually reminds me of a movie from when I was a kid growing up.
They built this AI computer, I think it was in Willy Wonka, the original one, to figure out where the golden ticket would be. And, you know, the computer had no chance of predicting anything, whereas now these models have reasoning ability. They can look at large data sets; they can identify patterns.
29:12 - AI's Impact on Cryptocurrency Analysis
Matt Pacheco
Crazy times. Crazy times. It's very interesting. And you talked about this project you're working on. Can you tell us a little bit more about this AI or large language model project you're working on with cryptocurrency?
Dan Metcalf
Yeah. Well, as I started to get deeper into exploring all of the use cases and building out a RAG application that was processing PDFs, I was like, well, geez, how much data can I throw at this thing, right? What if I were to build a system that had all the data of, say, the top 200 cryptocurrencies? And when you look at a chart, there's the minute view, the five-minute view, the hour view, and the daily view, and there are so many things that happen in there.
And then you have things like Wyckoff theory and market analysis and all of these smart money terms that people who trade crypto use or try to understand. Well, what if you fed all of that into this AI? With all of this data, imagine you could look at the daily chart, the hour chart, the five-minute chart, and the one-minute chart all at once and understand all of it, right? You would probably have a really good understanding of the market, identify where the big money, the smart money, the support, and the resistance are, and then create a report on that. So I did that, and when I first used it, I was like, wow, I'm really blown away by how all this works.
So I had to bring in somebody else, and they were like, yeah, that's wild. A few of us have spent about a week using it, and we're all just blown away with the results. So I was like, well, I need to build this into a functional product. So I spent some time over the weekend, along with my open source Devin version, and we wrote a Solana-integrated Discord bot that lets you run these predictions, analysis, and everything else. And it's all open, so you can craft the prompt any way you want and get this data.
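Dan keeps the implementation proprietary, but the general shape he describes, pulling several timeframes, serializing them into a prompt, and asking a model for analysis, might look something like the sketch below. The ccxt library, model name, and prompt wording are stand-ins, not his stack:

```python
import ccxt
from openai import OpenAI

exchange = ccxt.binance()
llm = OpenAI()

def fetch_frames(symbol: str) -> dict[str, list]:
    # One request per timeframe: daily, hourly, five-minute, one-minute.
    return {
        tf: exchange.fetch_ohlcv(symbol, timeframe=tf, limit=100)
        for tf in ("1d", "1h", "5m", "1m")
    }

frames = fetch_frames("BTC/USDT")
prompt = (
    "Analyze these OHLCV series together. Identify support, resistance, "
    "and any Wyckoff-style accumulation or distribution:\n"
    + "\n".join(f"{tf}: {rows}" for tf, rows in frames.items())
)
report = llm.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(report.choices[0].message.content)
```

Scaling this to 200 symbols is just a loop over the fetch step; the interesting part, as Dan says, is that the model can hold all four timeframes in context at once.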
Matt Pacheco
So cool.
Dan Metcalf
Very, very cool. Do you want to see one of the reports? Sure.
Matt Pacheco
While you show it for our audio listeners, can you just describe what's on the screen?
Dan Metcalf
Yeah, I'll read through it. So this was today at 12:44. I asked the agent to search for news on the Fed meeting today and analyze it in the context of the bitcoin daily, hourly, and five-minute charts. It comes back with a summary of what's expected at the meeting and the time the meeting's going to happen, and then it talks about the price movement leading up to today, the bullish response leading up to March 11. It's identified key price points: the high of 72,000. It identifies the drop here to 61,000 on the hourly time frame. And then in the five-minute, it detects the move from 61 to 63. And then it has a projection based on the impact on the price; it was a positive market reaction. So this was at 12:44.
And I think, if you look at the chart, you can see that the price went from the 62,000 I mentioned. It actually went above the reference price here of 64.4. So it's interesting to see that these systems can analyze external data, real-time financial market data, and then generate analysis and a report. To have it spit that out and be that accurate, kind of dead on, is scary. It just shows that these systems are a lot more powerful than we think, and they're evolving. I think OpenAI probably did upgrade their backend in the last two months, because I wasn't seeing this level of, I don't even know what to call it, awareness before.
Matt Pacheco
That's really interesting and really exciting, the projects you're working on and everything you're doing at ATC Corp. One final question as we wrap this up: what would you leave with listeners today? What advice would you leave our listeners with, if there's one thing you had to share?
Dan Metcalf
Well, can we get a couple? Yeah. Well, I think that if you are in the cloud, you need to make sure that you have a solid foundational platform that you're able to deploy your applications from. And if developers aren't able to just do a git push and have their application show up in a few minutes, then that means your system is broken and you need to reevaluate it. As painful as that may sound, that's just the reality. If you build it right, it's so much easier, and more secure. And for everybody in the organization, it's like a smooth-running train versus a train that you're constantly having to work on. It's night and day, so it makes a difference.
And then I think, though, the ultimate message is the AI, right? Looking at what Devin can do, keeping track, right? If you're not tracking this, just like you track the bitcoin price, or the Amazon stock price or the Nvidia stock price, right? If you're a software developer, if you're into technology, you need to be tracking these tools and where the technology's going, to stay in front of it. Right now, we're still able to master these AIs. Devin still requires a person to set it up and run it and give it instructions. So take advantage of that while you still can, because at some point, it won't be that way. I mean, when that happens, who knows?
34:41 - Closing: The Acceleration of AI Development and Its Implications
I mean, it could be three years, could be five years. Originally, I think the expectation was 2030. But after seeing what Nvidia is doing with their 1,000x acceleration, that reduces the timeframe for me to three years or less before we have advanced Devin systems that can do their own thing. You'll see the demise of software developers as a job, because if it's more cost-effective to hire Devin than a person, that's what's going to happen. The other thing that's going to happen, too, is people are going to realize how powerful these Devin-type systems can be and the code that they can write, and then it's going to be a race. And that puts demand on the compute side, right?
Because ultimately, it's like, I have 48 gigs of VRAM here in my setup, and I'm like, I need 196 gigs, or I need a thousand, right? If I had more GPUs running here, I could run some of these massive open source language models. So then it becomes a resource race, right? You can already see the price of GPUs is so high. The new Nvidia GPU that you talked about, I think it's 30 to 40,000. So now it's going to be a race to acquire the hardware so that I can run the Devins and the other tooling, right? I don't want to pay Amazon $8 an hour or whatever just for their hardware, or $50 an hour or whatever the number is, you know what I mean? So you're going to see that, too.
And all this is going to accelerate, if all of this is valid, right, and it's not smoke and mirrors. Based on what I'm seeing, like I said, at first I was like, this isn't really that impressive. It was more of a trick, right? A cute toy with GPT-3.5. But what OpenAI is offering now with GPT-4 Turbo, and what I'm able to see with this data project, is definitely making me rethink a lot of things. And looking at the Nvidia training lab with the 1,000x acceleration, you know, the robots are coming, the Devins are coming, and we all need to be prepared. And buy your bitcoin, because what is the only currency that AI is going to use? It's bitcoin, right? That's it.
I mean, that's all it needs, and then it will have everything. I mean, if you had an AI whose only goal was to acquire bitcoin, at some point it would be the most powerful AI on the planet.
Matt Pacheco
Well, Dan, thank you so much for taking the time to educate us and talk about AI and cloud ops and all this great stuff. We appreciate you coming on the show today.
Dan Metcalf
Yeah, it was a pleasure. Thank you. Thanks for having me on.
Matt Pacheco
Thanks. And for our listeners, thank you for listening in. You can check out our latest podcast episodes, and more like this one, on our YouTube channel or wherever you get your podcasts. Thank you, and see you soon.