Install Botfront on servers or clusters

This section explains how to deploy your Rasa assistant with Botfront on a remote server or a Kubernetes / OpenShift cluster.

Kubernetes / OpenShift installation

We provide Helm charts to install Botfront and Rasa projects on your cluster.

Please head to http://github.com/botfront/botfront-helm and follow the instructions.
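
As a rough sketch, assuming you install from a clone of that repository (the chart path, release name, namespace, and values file below are placeholders; the repository README documents the actual chart names and required values):

git clone https://github.com/botfront/botfront-helm.git
cd botfront-helm
# chart path and values file are placeholders, see the repository README
helm install botfront ./charts/botfront -f my-values.yaml --namespace botfront --create-namespace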

Create a cluster from scratch

If you are looking to create a new cluster for the purpose of hosting Botfront and Rasa projects, you can check the following Terraform project: https://github.com/dialoguemd/botfront-terraform-gcp. It is very specific to Google Cloud, but you can adapt it to the cloud provider of your choice.
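
If you go that route, the workflow is the standard Terraform one (the variables you need to supply are specific to that project, see its README):

git clone https://github.com/dialoguemd/botfront-terraform-gcp.git
cd botfront-terraform-gcp
terraform init    # download providers and modules
terraform plan    # review the resources that will be created
terraform apply   # create the cluster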

Single server installation (VM)

An alternative option is to deploy all the services on a single machine. You can get a virtual machine from the cloud provider of your choice. We recommend a machine with at least 1 CPU and 2 GB of RAM.

This is for experimentation only. The following installation is not secure and not suitable for production.

  1. Create a virtual machine with Ubuntu installed, and note the external IP address. For this tutorial, we’ll assume the IP address is 123.99.135.3
  2. Install Node.js
wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.1/install.sh | bash
export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
nvm install lts/erbium
  3. Install Docker and Docker Compose
# remove any previously installed Docker packages, then install Docker
sudo apt-get -y update
sudo apt-get -y remove docker docker-engine docker.io
sudo apt -y install docker.io
sudo systemctl start docker
sudo systemctl enable docker
# download the Docker Compose binary and make it executable
sudo apt -y install curl
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
  4. Install Botfront
npm install -g botfront
botfront init # create a project
  5. Edit the botfront.yml file
nano .botfront/botfront.yml

In the env section, change the root_url to the machine IP address (leave the port 8888 unchanged).

env:
  ...
  root_url: 'http://123.99.135.3:8888'
  ...
  6. Launch Botfront
botfront up
  7. Open Botfront in your browser (http://123.99.135.3:8888) and set up your project.

Look up the ID of your project. In the example URL http://123.99.135.3:8888/project/mSvL3ccZaz5RimDpF/, mSvL3ccZaz5RimDpF is the ID.

Run botfront set-project <project_id>. Rasa will restart and fetch that project’s configuration.
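
With the example project above, that would be:

botfront set-project mSvL3ccZaz5RimDpF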

  8. Go to settings/credentials and change the base_url host to the machine IP address (keep the port unchanged)
rasa_addons.core.channels.webchat.WebchatInput:
  session_persistence: true
  base_url: http://123.99.135.3:5005
  socket_path: "/socket.io/"
  9. Botfront is ready to use.

Configuration

Services and Docker images

The table below lists all the services that can be used with Botfront.

| Service      | Docker image                                                  |
|--------------|---------------------------------------------------------------|
| botfront     | botfront/botfront                                             |
| botfront-api | botfront/botfront-api                                         |
| rasa         | botfront/rasa-for-botfront                                    |
| duckling     | botfront/duckling                                             |
| actions      | Build your own                                                |
| mongo        | mongo or a hosted service (mLab, MongoDB Atlas, Compose, …)   |

Image tags

It is not recommended to deploy the images without tags or with the latest tag. Look in .botfront/botfront.yml for the tags corresponding to the version of Botfront you are using.

Duckling (a structured entity parser developed by Facebook) is not strictly required if your NLU pipeline doesn’t use it.

Also, be very careful with your choice regarding MongoDB. If you decide to run it as a container, be sure to at least properly mount the volume on a physical disk (otherwise all your data will be gone when the container is destroyed), and seriously consider scheduling regular backups.

Using a hosted service such as MongoDB Atlas is highly recommended; some providers even include a free plan that is more than enough for small projects.

Environment variables

The following table shows the environment variables required by each service. Be sure to make those available as arguments or in the manifest files of your deployment.

| Environment variable | Description | Required by |
|---|---|---|
| ROOT_URL | The Botfront app URL (e.g. https://botfront.your.domain) | botfront |
| MONGO_URL | The MongoDB connection string (e.g. mongodb://user:pass@server:port/database) | botfront, botfront-api |
| MONGO_OPLOG_URL | The MongoDB oplog connection string | botfront (optional) |
| MAIL_URL | An SMTP URL if you want to use the password reset feature | botfront |
| BF_PROJECT_ID | The Botfront project ID (typically bf) | rasa |
| BF_URL | The botfront-api root URL | rasa, actions |
| API_KEY | GraphQL API key. Set the authorization header to the API_KEY value to perform GraphQL operations | botfront, rasa |
| GRAPHQL_REQUEST_SIZE_LIMIT | The size over which requests sent to the GraphQL server receive an HTTP 413 (Payload Too Large) response. Defaults to 200kb and follows Express specs. | botfront (optional) |
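
For example, in a Docker Compose style deployment these variables go in the environment section of each service. The snippet below is only a sketch: image tags, hostnames, ports, and credentials are placeholders, not the values generated by the Botfront CLI.

version: '3'
services:
  botfront:
    image: botfront/botfront:<tag>
    environment:
      ROOT_URL: 'https://botfront.your.domain'
      MONGO_URL: 'mongodb://user:pass@mongo:27017/bf'
      MONGO_OPLOG_URL: 'mongodb://user:pass@mongo:27017/local'  # optional
  botfront-api:
    image: botfront/botfront-api:<tag>
    environment:
      MONGO_URL: 'mongodb://user:pass@mongo:27017/bf'
  rasa:
    image: botfront/rasa-for-botfront:<tag>
    environment:
      BF_PROJECT_ID: 'bf'
      BF_URL: 'http://botfront-api:8080'  # placeholder, point it to your botfront-api root URL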

The following table lists the environment variables used to configure logging within botfront and botfront-api. Make those available as arguments or in the manifest files of your deployment.

| Environment variable | Description | Allowed values | Default |
|---|---|---|---|
| APPLICATION_LOG_TRANSPORT | Where the application logs are written | console, stackdriver | console |
| GOOGLE_APPLICATION_CREDENTIALS | Path to a Google Cloud service account file, required if you use stackdriver (see https://cloud.google.com/docs/authentication/getting-started for details) | | none |
| APPLICATION_LOG_LEVEL | Minimal severity of logs | critical, error, warn, info, debug | info |
| APPLICATION_LOGGER_NAME | The name of the application logger in Stackdriver | Any string | botfront_log_app |
| MAX_LOGGED_ARG_LENGTH | Maximum length of stringified args, for readability and data usage concerns. Only applies to info logs when APPLICATION_LOG_LEVEL is set to info; args are never truncated when APPLICATION_LOG_LEVEL is set to debug | Any positive integer | 1000 |
| MAX_LOGGED_DATA_LENGTH | Maximum length of stringified data in requests to the API, for readability and data usage concerns | Any positive integer | 100 |
| APPLICATION_API_LOGGER_NAME | The name of the API application logger in Stackdriver | Any string | botfront-api_log_app |
| APPLICATION_API_LOG_TRANSPORT | Where the API application logs are written | console, stackdriver | console |
| APPLICATION_API_LOG_LEVEL | Minimal severity of logs for the API | error, warn, info, debug | info |
| MAX_LOG_BODY_LENGTH | Maximum length of stringified body in logged requests or responses from the API | Any positive integer | 100 |
| AUDIT_LOG_TRANSPORT | Where the audit logs are written (Enterprise Edition only) | console, stackdriver | console |
| AUDIT_LOGGER_NAME | The name of the audit logger in Stackdriver (Enterprise Edition only) | Any string | botfront_log_audit |
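
For instance, to send application logs to Stackdriver at debug level, you would set something like the following on the botfront service (the credentials path is a placeholder):

environment:
  APPLICATION_LOG_TRANSPORT: 'stackdriver'
  GOOGLE_APPLICATION_CREDENTIALS: '/credentials/service-account.json'
  APPLICATION_LOG_LEVEL: 'debug'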

Volumes

Although volumes are not technically required for Botfront to run, your data will be lost when containers are destroyed if you do not mount them.

| Volume | Description | Used by |
|---|---|---|
| /app/models | Where Botfront stores the model returned by Rasa when training is completed | botfront |
| /app/models | Where Rasa loads a model from when it starts | rasa |
| /data/db | Where MongoDB persists your data | mongo |

/app/models should be mounted to the same location for both Botfront and Rasa, so that Rasa can load the latest trained model when it restarts.
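
In Docker Compose terms, that means mounting the same host directory into both containers (the host paths below are placeholders):

services:
  botfront:
    volumes:
      - ./models:/app/models
  rasa:
    volumes:
      - ./models:/app/models
  mongo:
    volumes:
      - ./mongo-data:/data/db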

MongoDB database considerations

It is highly recommended (but optional) to provide an oplog URL with MONGO_OPLOG_URL. This improves the reactivity of the platform and reduces the network throughput between MongoDB and Botfront.

IMPORTANT: choose a very short database name

Choose a very short database name (e.g. bf) and keep your response names reasonably short to avoid hitting MongoDB's name length limits.
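
For example (host and credentials are placeholders; on a replica set the oplog lives in MongoDB's local database):

MONGO_URL=mongodb://user:pass@mongo-host:27017/bf
MONGO_OPLOG_URL=mongodb://user:pass@mongo-host:27017/local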

Indicative minimal technical requirements

These are the minimal requirements:

| Service | RAM | CPU |
|---|---|---|
| botfront | 1 GB | 1 |
| botfront-api | 128 MB | 0.5 |
| duckling | 512 MB | 0.5 |
| rasa (depending on NLU pipeline and policies) | 1-8 GB | 1-4 |
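
On Kubernetes, these figures translate into resource requests on each deployment. A sketch for the botfront service, matching the table above:

resources:
  requests:
    memory: "1Gi"
    cpu: "1"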

Endpoints

Endpoints let you define how your Rasa instance communicates with Botfront and the actions server:

  • the Botfront API to query the bot responses (nlg)
  • the actions server (action_endpoint)
  • the tracker store (tracker_store)
nlg:
  type: 'rasa_addons.core.nlg.GraphQLNaturalLanguageGenerator'
  url: 'http://botfront:3000/graphql'
action_endpoint:
  url: 'http://actions:5055/webhook'
tracker_store:
  store_type: 'rasa_addons.core.tracker_stores.botfront.BotfrontTrackerStore'
  url: 'http://botfront:3000/graphql'