Posts Tagged localai

Low-Code LLM Apps with LocalAI, Flowise, and Pulumi on AWS

In a previous blog post, we discussed how easy it is to build a 🦜️🔗 LangChain LLM application and use 🦜️🏓 LangServe and Pulumi to deploy it on an AWS Fargate cluster. We even went a step further and deployed a Pinecone index, all in a few lines of Pulumi code, to provide a vector store for our LLM application.

This time, let me walk you down a different path for creating an LLM application. This LLM-powered application uses Flowise, a low-code drag-and-drop tool for visually building LLM flows, together with LocalAI, a local inference engine that lets us run LLMs locally or on-prem on consumer-grade hardware. Everything will be deployed on an AWS EKS cluster using Pulumi and TypeScript.
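To give a taste of the deployment side, here is a minimal Pulumi TypeScript sketch of that setup. The Helm chart names and repository URLs are assumptions for illustration; the full post walks through the actual configuration, so check the upstream charts before relying on them.

```typescript
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// Provision a managed EKS cluster; @pulumi/eks creates the VPC and
// node group with sensible defaults.
const cluster = new eks.Cluster("llm-cluster", {
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 3,
    instanceType: "t3.xlarge", // CPU-only nodes: LocalAI targets consumer-grade hardware
});

// A Kubernetes provider scoped to the freshly created cluster.
const provider = new k8s.Provider("eks-provider", {
    kubeconfig: cluster.kubeconfigJson,
});

// Deploy LocalAI from its community Helm chart (repo URL is an assumption).
const localAi = new k8s.helm.v3.Release("local-ai", {
    chart: "local-ai",
    repositoryOpts: { repo: "https://go-skynet.github.io/helm-charts/" },
}, { provider });

// Deploy Flowise from a community Helm chart (repo URL is an assumption).
const flowise = new k8s.helm.v3.Release("flowise", {
    chart: "flowise",
    repositoryOpts: { repo: "https://cowboysysop.github.io/charts/" },
}, { provider });

// Kubeconfig for kubectl access to the cluster.
export const kubeconfig = cluster.kubeconfig;
```

With `pulumi up`, a sketch like this stands up the cluster and both workloads; inside Flowise you would then point the chat model node at the in-cluster LocalAI service.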

Read more →