- installs kodosumi and all prerequisites along with the kodosumi example flows
- starts Ray and kodosumi on your localhost
- deploys example flows
The walkthrough below uses `ray==2.48.0` and `python==3.12.6`.
If you want to skip the examples, continue with the kodosumi development workflow and start implementing your custom agentic service with the kodosumi framework.
Install and run examples
STEP 1 - Clone and install kodosumi-examples
Clone and install the kodosumi-examples into your Python Virtual Environment. The kodosumi and Ray packages are installed automatically as dependencies.

Since the examples build on CrewAI and langchain, the installation of the kodosumi examples takes a while. All dependencies of the examples are installed. Please note that these dependencies are managed by Ray in production. See deployment.
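A minimal shell sketch of this step; the repository URL is a placeholder, substitute the actual clone URL of kodosumi-examples:

```bash
# create and activate a fresh virtual environment (python==3.12.6)
python3 -m venv .venv
source .venv/bin/activate

# clone the examples (hypothetical URL, replace with the real one)
git clone https://github.com/masumi-network/kodosumi-examples.git
cd kodosumi-examples

# installing pulls in kodosumi, ray, CrewAI, langchain and friends
pip install .
```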
STEP 2 - Prepare runtime environment
You need an OpenAI API key to run some of the examples. Specify the API key in `.env`.
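A one-line `.env` sketch; `OPENAI_API_KEY` is the variable name conventionally read by the OpenAI SDK and is assumed here:

```bash
# .env -- loaded with dotenv before starting Ray (see STEP 3)
OPENAI_API_KEY=sk-your-key-here
```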
STEP 3 - Start Ray
Start the Ray head node on your localhost. Load environment variables with `dotenv` before starting Ray.
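A sketch using the `dotenv` CLI from the python-dotenv package (install the `python-dotenv[cli]` extra if the command is missing):

```bash
# inject .env into the environment, then start a local Ray head node
dotenv run -- ray start --head
```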
Check the cluster with `ray status` and visit the Ray dashboard at http://localhost:8265. For more information about Ray, visit Ray’s documentation.
STEP 4 - Deploy
You have various options to deploy and run the example services. The kodosumi-examples repository ships with the following examples in `kodosumi_examples`:
- hymn - creates a hymn based on a given topic. The example demonstrates the use of CrewAI and OpenAI.
- prime - calculates prime number gaps. Distributes the tasks across the Ray cluster and demonstrates performance benefits.
- throughput - real-time experience of different event stream pressures with parameterized BPMs (beats per minute).
- form - demonstrates form elements supported by kodosumi.
The deployment alternatives below are demonstrated with `kodosumi_examples.hymn`.
Alternative 1: run with uvicorn
You can launch each example service as a Python module. The module starts uvicorn with the `app` object, and the service endpoints are available at http://localhost:8011/openapi.json. Launch another terminal session, source the Python Virtual Environment, and register this URL with the kodosumi panel.
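A hedged sketch of both steps; the module path and the exact `koco start --register` invocation are assumptions based on the surrounding text:

```bash
# terminal 1: run the hymn service as a python module
# (assumed to boot uvicorn with the app object on port 8011)
python -m kodosumi_examples.hymn

# terminal 2: source the virtual environment, then register the service
source .venv/bin/activate
koco start --register http://localhost:8011/openapi.json
```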
The panel’s default configuration is defined in `config.py` and reads `name=admin` and `password=admin`. Log in, launch the Hymn Creator from the service screen, and revisit results at the timeline screen.
You can start another service, for example `prime`, in a new terminal and register it with `koco start` in the same way.
Alternative 2: deploy and run with Ray Serve
Run your services as Ray Serve deployments. This is the preferred approach to deploy services in production. The downside of this approach is that you have to use remote debugging tools and attach to session breakpoints for debugging (see Using the Ray Debugger). Ray Serve is built on top of Ray, so it easily scales to many machines and offers flexible scheduling support such as fractional GPUs, so you can share resources and serve many applications at low cost.

With Ray Serve you either run or deploy your services. Instead of the uvicorn mechanics, which refer to the `app` application object, Ray Serve demands the bound `fast_app` object. To test and improve your service, run it with `serve run`. The `serve deploy` command deploys your Serve application to the Ray cluster: it sends a deploy request, and the application is deployed asynchronously. This command is typically used for deploying applications in a production environment.
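Hedged sketches of both commands; the `fast_app` import path and the config filename are assumptions:

```bash
# development: run the bound fast_app object in the foreground
serve run kodosumi_examples.hymn:fast_app

# production: send an asynchronous deploy request to the Ray cluster
# (serve deploy expects a Serve config file)
serve deploy config.yaml
```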
The panel’s `--register` option must connect to Ray Serve’s proxy URL `/-/routes`. With `serve run` or `serve deploy` the port defaults to `8000`, so you start `koco start` with the Ray Serve endpoint http://localhost:8000/-/routes.
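For example, a minimal invocation with the default port:

```bash
# register the panel against Ray Serve's proxy endpoint
koco start --register http://localhost:8000/-/routes
```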
Multi-service setup with Serve config files
`serve run` and `serve deploy` each feature a single service. Running multiple uvicorn services is possible but soon gets dirty and quirky. For multi-service deployments, use Ray Serve config files.
In directory `./data/config`, create a file `config.yaml` with Serve’s overarching configuration.
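For example, a minimal sketch; apart from the port `8001` referenced below, the field values are assumptions to adapt to your cluster:

```yaml
# ./data/config/config.yaml -- overarching Serve settings
proxy_location: EveryNode
http_options:
  host: 127.0.0.1
  port: 8001
```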
Alongside `config.yaml`, create service configuration files. For each service deployment, create a dedicated configuration file:
- hymn.yaml
- prime.yaml
- throughput.yaml
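Each file might look like the following sketch; the schema is an assumption modeled on Ray Serve application entries, and `route_prefix`, `import_path`, and the runtime requirements are placeholders:

```yaml
# ./data/config/hymn.yaml -- one service deployment per file
name: hymn
route_prefix: /hymn
import_path: kodosumi_examples.hymn:fast_app
runtime_env:
  pip:
    - crewai
```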
Start Ray Serve and perform a Ray Serve deployment, then launch `koco start` with the Ray Serve endpoint http://localhost:8001/-/routes as configured in `config.yaml`.
`koco deploy` processes the directory `./data/config`. All files alongside `config.yaml` are deployed. You can test your deployment setup with `koco deploy --dry-run --file ./data/config/config.yaml`.
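A final sketch of the multi-service roundtrip; `koco deploy` without `--dry-run` is assumed to perform the actual deployment:

```bash
# validate the merged configuration without touching the cluster
koco deploy --dry-run --file ./data/config/config.yaml

# deploy all services alongside config.yaml, then register the panel
koco deploy --file ./data/config/config.yaml
koco start --register http://localhost:8001/-/routes
```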
Where to go from here?
- Continue with kodosumi development workflow
- See the admin panel screenshots
- Read about basic concepts and terminology