Ktor is designed to be flexible and extensible. It is composed of small, simple pieces, but if you don't know how they fit together, it can feel like a black box.
In this section, you will discover what Ktor is doing under the hood, and you will learn more about its generic infrastructure.
You can run a Ktor application in several ways:

* With a plain main function, by calling embeddedServer
* Running an EngineMain main function and using a HOCON application.conf configuration file
* As a test, using withTestApplication from the ktor-server-test-host artifact

To begin with, an immutable ApplicationEngineEnvironment has to be built: it holds a classLoader, a logger, a configuration, a monitor that acts as an event bus for application events, a set of connectors, the modules that will form the application, and watchPaths.
You can build it using ApplicationEngineEnvironmentBuilder, and there are handy DSL functions such as applicationEngineEnvironment and commandLineEnvironment, among others.
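As a minimal sketch, assuming the Ktor 1.x package layout (the connector values and module body are illustrative), building an environment with the applicationEngineEnvironment DSL could look like this:

```kotlin
import io.ktor.application.*
import io.ktor.response.*
import io.ktor.routing.*
import io.ktor.server.engine.*
import org.slf4j.LoggerFactory

val env = applicationEngineEnvironment {
    log = LoggerFactory.getLogger("ktor.application")
    // A connector describes a host/port the engine should listen on
    connector {
        host = "0.0.0.0"
        port = 8080
    }
    // Each module configures the (initially empty) Application pipeline
    module {
        routing {
            get("/") { call.respondText("Hello from a module") }
        }
    }
}
```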
There are multiple ApplicationEngine implementations, one per supported server: Netty, Jetty, CIO and Tomcat.
The application engine is the class in charge of running the application: it has a specific configuration, an associated environment, and can be started and stopped.
When you start a specific application engine, it uses the provided configuration to listen on the right ports and hosts, using SSL, certificates and so on, with the specified workers.
Connectors are used to listen on specific HTTP/HTTPS hosts and ports, while the Application pipeline is used to handle the requests.
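For instance, a hedged sketch of constructing an engine from a factory and an environment, then starting and stopping it (the env value is assumed to come from the applicationEngineEnvironment sketch above):

```kotlin
import io.ktor.server.engine.*
import io.ktor.server.netty.*

fun run(env: ApplicationEngineEnvironment) {
    // Pick a factory (Netty here) and bind it to the environment
    val engine: ApplicationEngine = embeddedServer(Netty, env)
    engine.start(wait = false)
    // ... later, shut it down with a grace period and a timeout
    // (the exact stop(...) signature differs slightly between Ktor versions)
    engine.stop(1000, 2000)
}
```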
Application pipeline: it is created by the ApplicationEngineEnvironment and is initially empty.
It is a pipeline without a subject, with ApplicationCall as its context.
Each specified module is called to configure this application when the environment is created.
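A module is just a function with an Application receiver; as a hedged sketch (the module name and route are illustrative):

```kotlin
import io.ktor.application.*
import io.ktor.response.*
import io.ktor.routing.*

// Called against the initially empty Application pipeline when the environment is created
fun Application.helloModule() {
    routing {
        get("/health") { call.respondText("OK") }
    }
}
```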
When you run your own main method and call the embeddedServer function, you provide a specific ApplicationEngineFactory, and an ApplicationEngineEnvironment is then created or provided.
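For example, a sketch of the embeddedServer entry point with the Netty factory (the port and route are illustrative); the trailing lambda becomes a module in the environment that embeddedServer builds for you:

```kotlin
import io.ktor.application.*
import io.ktor.response.*
import io.ktor.routing.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*

fun main() {
    // Netty is the ApplicationEngineFactory; embeddedServer creates the environment
    embeddedServer(Netty, port = 8080) {
        routing {
            get("/") { call.respondText("Hello, world!") }
        }
    }.start(wait = true)
}
```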
Ktor defines one EngineMain class per supported server engine.
This class defines a main method that can be executed to run the application.
Using commandLineEnvironment, it loads the HOCON application.conf file from your resources and uses extra arguments to determine which modules to install and how to configure the server.
Those classes are normally declared in CommandLine.kt source files.
io.ktor.server.cio.EngineMain.main
io.ktor.server.jetty.EngineMain.main
io.ktor.server.netty.EngineMain.main
io.ktor.server.tomcat.EngineMain.main
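As a sketch, assuming the Netty variant and illustrative module and configuration names, delegating to EngineMain looks roughly like this, with the module referenced from application.conf:

```kotlin
import io.ktor.application.*
import io.ktor.response.*
import io.ktor.routing.*

// Delegates to the engine-specific main, which reads resources/application.conf
fun main(args: Array<String>): Unit = io.ktor.server.netty.EngineMain.main(args)

// Referenced from application.conf, for example (shown here as a comment):
// ktor {
//   deployment { port = 8080 }
//   application { modules = [ com.example.AppKt.mainModule ] }
// }
fun Application.mainModule() {
    routing {
        get("/") { call.respondText("Hello from EngineMain") }
    }
}
```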
For testing, Ktor defines a TestApplicationEngine and the handy withTestApplication method, which allow you to test your application modules, pipeline, and other features without actually starting a server or mocking any facility.
It uses an in-memory configuration, MapApplicationConfig("ktor.deployment.environment" to "test"), that you can check to determine whether the application is running in a test environment.
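A minimal test sketch, assuming kotlin.test and a mainModule like the one in the EngineMain sketch above (both assumptions):

```kotlin
import io.ktor.http.*
import io.ktor.server.testing.*
import kotlin.test.*

class ApplicationTest {
    @Test
    fun rootRespondsOk() = withTestApplication({ mainModule() }) {
        // The in-memory configuration marks the environment as "test"
        assertEquals(
            "test",
            application.environment.config.property("ktor.deployment.environment").getString()
        )
        // handleRequest runs the call through the Application pipeline; no sockets involved
        with(handleRequest(HttpMethod.Get, "/")) {
            assertEquals(HttpStatusCode.OK, response.status())
        }
    }
}
```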
Associated with the environment is a monitor instance that Ktor uses to raise application events. You can use it to subscribe to events; for example, you can subscribe to the application stop event to shut down specific services or finalize some resources.
A list of Ktor-defined events:
val ApplicationStarting = EventDefinition<Application>()
val ApplicationStarted = EventDefinition<Application>()
val ApplicationStopPreparing = EventDefinition<ApplicationEnvironment>()
val ApplicationStopping = EventDefinition<Application>()
val ApplicationStopped = EventDefinition<Application>()
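For instance, a hedged sketch of subscribing to ApplicationStopped from inside a module (the module name and cleanup body are illustrative):

```kotlin
import io.ktor.application.*

fun Application.lifecycleModule() {
    // The monitor is the event bus held by the environment
    environment.monitor.subscribe(ApplicationStopped) {
        environment.log.info("Application stopped, releasing resources")
    }
}
```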
Ktor defines pipelines for asynchronous extensible computations. Pipelines are used all over Ktor.
Every pipeline has an associated subject type, a context type, and a list of phases with interceptors associated with them, as well as attributes that act as a small typed object container.
Phases are ordered and can be defined to execute after or before another phase, or at the end.
Each pipeline instance has an ordered list of phase contexts, which contain the set of interceptors for each phase.
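For example, a minimal sketch (assuming the io.ktor.util.pipeline package and illustrative phase names) of defining phases, ordering them, and registering interceptors:

```kotlin
import io.ktor.util.pipeline.*
import kotlinx.coroutines.runBlocking

fun main() {
    val first = PipelinePhase("First")
    val pipeline = Pipeline<String, Unit>(first)

    val last = PipelinePhase("Last")
    pipeline.addPhase(last)                   // appended at the end

    val second = PipelinePhase("Second")
    pipeline.insertPhaseAfter(first, second)  // ordered relative to another phase

    pipeline.intercept(first) { println("first sees: $it") }
    pipeline.intercept(second) { println("second sees: $it") }
    pipeline.intercept(last) { println("last sees: $it") }

    // Interceptors run in phase order: First, Second, Last
    runBlocking { pipeline.execute(Unit, "subject") }
}
```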
The idea here is that an interceptor in a specific phase should not depend on other interceptors in the same phase, only on interceptors from previous phases.
When a pipeline is executed, all the registered interceptors run in the order defined by the phases.
The server part of Ktor defines an ApplicationCallPipeline without a subject and with ApplicationCall as its context.
The Application instance is an ApplicationCallPipeline, so when the server's application engine handles an HTTP request, it executes the Application pipeline.
The context class ApplicationCall contains the application, the request, the response, and the attributes and parameters.
In the end, the application modules end up registering interceptors for specific phases of the Application pipeline, in order to process the request and emit a response.
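To make this concrete, a hedged sketch of a module that intercepts the Application pipeline directly (the phase choice and route are illustrative; you would normally use routing instead):

```kotlin
import io.ktor.application.*
import io.ktor.request.*
import io.ktor.response.*

fun Application.rawInterceptionModule() {
    // Application is an ApplicationCallPipeline, so we can intercept its phases directly
    intercept(ApplicationCallPipeline.Call) {
        if (call.request.uri == "/ping") {
            call.respondText("pong")
        }
    }
}
```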
The ApplicationCallPipeline defines the following built-in phases for its pipeline:
val Setup = PipelinePhase("Setup") // Phase for preparing the call, and processing attributes
val Monitoring = PipelinePhase("Monitoring") // Phase for tracing calls: logging, metrics, error handling etc.
val Features = PipelinePhase("Features") // Phase for infrastructure features, most intercept at this phase
val Call = PipelinePhase("Call") // Phase for processing a call and sending a response
val Fallback = PipelinePhase("Fallback") // Phase for handling unprocessed calls
Ktor defines application features using the ApplicationFeature class.
A feature is something that you can install into a specific pipeline.
It has access to the pipeline, and it can register interceptors and do all sorts of other things.
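A minimal hedged sketch of a custom feature following this pattern (the names and behavior are illustrative, not a Ktor-provided feature):

```kotlin
import io.ktor.application.*
import io.ktor.request.*
import io.ktor.util.*

class SimpleLogging(val prefix: String) {
    // The user-facing configuration used inside install(SimpleLogging) { ... }
    class Configuration {
        var prefix: String = ">>"
    }

    companion object Feature : ApplicationFeature<ApplicationCallPipeline, Configuration, SimpleLogging> {
        override val key = AttributeKey<SimpleLogging>("SimpleLogging")

        override fun install(
            pipeline: ApplicationCallPipeline,
            configure: Configuration.() -> Unit
        ): SimpleLogging {
            val config = Configuration().apply(configure)
            val feature = SimpleLogging(config.prefix)
            // Register an interceptor on the pipeline the feature is installed into
            pipeline.intercept(ApplicationCallPipeline.Monitoring) {
                println("${feature.prefix} ${call.request.uri}")
            }
            return feature
        }
    }
}
```

It can then be installed like any other feature, for example install(SimpleLogging) { prefix = "-->" }.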
To illustrate how features and a pipeline tree work together, let’s have a look at how Routing works.
Routing, like other features, is normally installed like this:
install(Routing) { }
But there is a handy method to register and start using it, which also installs it if it is not already installed:
routing { }
Routing is defined as a tree, where each node is a Route that is also a separate instance of an ApplicationCallPipeline.
So when the root routing node is executed, it executes its own pipeline, and it stops executing once the route has been processed.
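As an illustrative sketch, each nested block below creates a Route node in that tree, and each node is its own ApplicationCallPipeline:

```kotlin
import io.ktor.application.*
import io.ktor.response.*
import io.ktor.routing.*

fun Application.apiModule() {
    routing {                                  // root Route node
        route("/api") {                        // child node for the /api prefix
            get("/users") {                    // leaf node: GET /api/users
                call.respondText("all users")
            }
            get("/users/{id}") {               // leaf node with a path parameter
                call.respondText("user ${call.parameters["id"]}")
            }
        }
    }
}
```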