Function as a Service (FaaS) is a growing trend in cloud computing. It's an opinionated approach to helping users write cloud applications that scale. A central tenet of cloud functions (and what really makes them effective) is their stateless nature. They handle requests, then disappear. Nothing is ever permanently "on," which is why they're so cost-effective for the cloud provider to host and manage.
Of course, there are times when it does make sense to handle state. But this does not mean you should clutter your stateless function code with state-related functionality. This is better left to an orchestration engine or similar event-driven strategy that is external to your cloud function. This gives you a proper separation of concerns between what is stateless and what is not (which is what you're really after with serverless architectures).
Function as a Service (FaaS) is a common approach to serverless architectures across cloud vendors. FaaS works because it’s opinionated. Each cloud vendor has their own offering with constraints (AWS Lambda, Azure Functions, Google Cloud Functions, to name a few). What the customer gets in return is true utility pricing and the promise of a scalable architecture built of stateless building blocks.
Successful deployments require a different mindset than the one used for monolithic apps. In my situation, I'm tilted toward stitching it all back together, given that I work in integration. Once a monolithic app is broken up into stateless functions, how are those functions best joined back together into something that behaves like the original app?
Calling APIs and conversing with people share many overlapping qualities. From an engineering perspective, inputs, outputs, and errors are common patterns to each. But people are particularly challenging when conversing on their terms. This blog details how to handle the complexities of human conversation using RESTful principles and shows how to unify process modeling concepts with natural language processing (NLP) to orchestrate complex use cases.
There is already a lot of literature about the rise and fall of chatbots, and the rise of bots that don’t chat. For someone who has not followed this space closely, this statement may not make any sense. However, it pretty much summarizes the evolution of the chatbot frenzy of the last 15 months. We’re strong believers in this space and as such, we’ve been looking at ways to better articulate the value of chatbots.
Messaging apps are a viable deployment platform for your next chatbot, app, or whatever-you-want-to-call-it. But deploying a solution atop a messaging app has its own unique characteristics: both opportunities and pitfalls when compared to other platforms like iOS, Android, and the Web. Whether you're building a customer-facing chatbot or an internal productivity tool within an enterprise, there are some principles we've found that help you get the most out of your chosen messaging app.
At Intwixt, we feel the chatbot hype is well-deserved, particularly for the enterprise. When executed correctly, the proper approach substantially reduces UI development and maintenance costs while also delivering an intuitive product with greatly reduced training and deployment costs. It’s a win-win for those who author UIs and those who use them. The following sections detail our approach.
There are many opinions about what distinguishes bots from apps. The most common is that apps are visual and bots are conversational. It’s a nice shorthand, but it’s important to not get caught up in strict dichotomies. Bots can be visual and apps can likewise be conversational. It’s more about the primary perspective for each. And with bots the primary perspective is the message stream.
At Intwixt we see the most value in building intelligent, rule-based bots that leverage both the rich, non-chat UI experience (such as the one offered by Messenger) and certain AI capabilities such as NLP. While many platforms promote one approach over the other, we see value in providing a framework that encapsulates the best of both: the programmed intelligence of guided interactions and the artificial intelligence of NLP. Our process-first approach delivers the rules you need to define your bot's intelligence, while our integration-based architecture lets you tightly integrate the best NLP platforms available.
In this blog we are going to show how easy it is to create a Messenger chatbot that understands unstructured user input. We'll use Google's API.AI for natural language processing (NLP) support. The bot will analyze the messages sent by users and respond appropriately. If a message is a greeting, the bot will respond with a greeting. If a message is not recognized, the bot will respond with a static message.
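The routing described above boils down to mapping a recognized intent to a reply. Here's a minimal, illustrative sketch of that logic (not Intwixt's actual code); it assumes the NLP service (e.g. API.AI) has already classified the user's message into an intent name, and the intent labels and reply strings are made up for the example:

```python
# Canned replies keyed by the intent name returned from the NLP service.
# "greeting" is an assumed intent label for this sketch.
INTENT_REPLIES = {
    "greeting": "Hi there! Nice to meet you.",
}

# Any intent we don't recognize gets a single static fallback message.
FALLBACK_REPLY = "Sorry, I didn't catch that."

def reply_for(intent: str) -> str:
    """Return the bot's reply for a classified intent."""
    return INTENT_REPLIES.get(intent, FALLBACK_REPLY)

print(reply_for("greeting"))   # a greeting gets a greeting back
print(reply_for("smalltalk"))  # anything unrecognized gets the fallback
```

In the real bot this lookup would run inside the Messenger webhook handler, after the NLP call returns.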
Learn how to create a NoSQL data table with a RESTful API. Deploy and test using the Swagger Test Framework.
Learn how to read and write data from your Messenger Chatbot. Send a special greeting to first-time users and track historical data for returning users.
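The first-time vs. returning-user behavior can be sketched as a simple lookup-then-branch. This is a hypothetical illustration only: in the tutorial the history lives in the NoSQL data table behind the REST API, whereas an in-memory dict stands in for it here, and the greeting text is an assumption:

```python
# Stand-in for the bot's NoSQL store, keyed by the Messenger user ID.
seen_users: dict[str, dict] = {}

def greet(user_id: str) -> str:
    """Greet a user, branching on whether we've seen them before."""
    if user_id not in seen_users:
        # First contact: record the user and send the special greeting.
        seen_users[user_id] = {"visits": 1}
        return "Welcome! Great to meet you."
    # Returning user: update the stored history and acknowledge it.
    seen_users[user_id]["visits"] += 1
    return f"Welcome back! This is visit #{seen_users[user_id]['visits']}."
```

The same read-check-write pattern applies when the store is a remote table; only the dict accesses become API calls.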
Composability is a critical feature of Intwixt bots. Learn how to combine multiple Intwixt bots into a single bot to extend and evolve its capabilities.
Learn how to call an external HTTP API from your bot. Learn how to define a global data model and make it available to all of your bots. Leverage global data models as you define how to map and call external HTTP APIs.
Learn how to create a Bot with a RESTful API. Contact your bot using REST/JSON.
Evolve your Messenger Chatbot using the Intwixt Bot Designer. Learn basic data- and process-modeling concepts.
Build your first Messenger Chatbot. Use a prebuilt bot template and confirm your Facebook Messenger credentials with Intwixt