The examples require `ray==2.48.0` and `python==3.12.6`.
If you want to skip the examples, continue with the kodosumi development workflow and start implementing your custom agentic service with the kodosumi framework.
Install the `kodosumi-examples` package into your Python Virtual Environment. The kodosumi and Ray packages are installed automatically as dependencies.
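A minimal installation sketch, assuming a fresh virtual environment in `.venv` (the directory name is an assumption):

```shell
# Create and activate a virtual environment (the .venv path is an assumption),
# then install the examples; kodosumi and ray are pulled in as dependencies.
python3 -m venv .venv
. .venv/bin/activate
pip install kodosumi-examples
```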
Because the examples build on frameworks such as CrewAI and langchain, the installation of the kodosumi examples takes a while: all dependencies of the examples are installed. Note that in production these dependencies are managed by Ray; see deployment.
Some examples require API keys. Add them to an environment file `.env`; kodosumi loads this file with `dotenv` before starting Ray.
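For example, a `.env` file could look like the following; the variable name `OPENAI_API_KEY` is only an assumption — add whichever keys your examples actually need:

```shell
# Write a minimal .env file; the key name shown is an assumption.
cat > .env <<'EOF'
OPENAI_API_KEY=replace-with-your-key
EOF
```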
Check the cluster state with `ray status` and visit the Ray dashboard at http://localhost:8265. For more information about Ray, see Ray's documentation.
The examples ship in the Python package `kodosumi_examples`. Start one of them, for example `kodosumi_examples.hymn`, with uvicorn. The OpenAPI specs of the `app` are available at http://localhost:8011/openapi.json. Launch another terminal session, source the Python Virtual Environment, and register this URL with the kodosumi panel.
The default user is defined in `config.py` and reads `name=admin` and `password=admin`. Launch the Hymn Creator from the service screen and revisit your results at the timeline screen.
You can start another service, `prime`, in a new terminal with `koco start`.
While uvicorn runs the `app` application object, Ray serve demands the bound `fast_app` object. To test and improve your service, run it with `serve run`. The `serve deploy` command is used to deploy your Serve application to the Ray cluster: it sends a deploy request to the cluster, and the application is deployed asynchronously. This command is typically used for deploying applications in a production environment.
In this setup the `--register` option must connect to Ray's proxy URL `/-/routes`. With `serve run` or `serve deploy` the port defaults to 8000, so you start `koco start` with the Ray serve endpoint http://localhost:8000/-/routes.
`serve run` and `serve deploy` address a single service. Running multiple uvicorn services is possible but quickly gets messy. For multi-service deployments, use Ray serve config files.
In the directory ./data/config, create a file `config.yaml` with serve's overarching configuration.
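A sketch of what such a `config.yaml` might contain — the field names follow the Ray Serve config file schema; the port 8001 matches the endpoint mentioned below, while the other values are assumptions:

```yaml
# ./data/config/config.yaml -- overarching Ray serve settings (sketch)
proxy_location: EveryNode
http_options:
  host: 127.0.0.1
  port: 8001
```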
Alongside `config.yaml`, create service configuration files. For each service deployment create a dedicated configuration file:
hymn.yaml
prime.yaml
throughput.yaml
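Each per-service file holds one Serve application entry. A hedged sketch for `hymn.yaml` follows; the `import_path` and the `runtime_env` contents are assumptions:

```yaml
# ./data/config/hymn.yaml -- one application entry (sketch)
name: hymn
route_prefix: /hymn
import_path: kodosumi_examples.hymn:fast_app  # assumed module path
runtime_env:
  pip:
    - crewai   # example-specific dependencies, managed by Ray
```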
Now perform a Ray serve deployment and start `koco start` with the Ray serve endpoint http://localhost:8001/-/routes, as configured in `config.yaml`.
Deployment processes the directory ./data/config: all files alongside `config.yaml` are deployed. You can test your deployment setup with `koco deploy --dry-run --file ./data/config/config.yaml`.
Where to go from here?