There are two major components to ksync, one that runs on your local system and another that operates on the remote cluster.
The local piece of ksync is operated via the ksync binary. It provides some general functionality:
These commands read and write the config file (~/.ksync/ksync.yaml). When a new spec is created, it is added to the spec list and then serialized to disk.
```yaml
apikey: ksync
context: ""
log-level: debug
namespace: default
port: 40322
spec:
- name: sunny-husky
  containername: ""
  pod: ""
  selector: app=app
  namespace: default
  localpath: /tmp/ksync
  remotepath: /code
  reload: true
```
Because these commands simply operate on the config file, watch does not need to be running to add or remove specs.
The current status of folders is managed by watch. To fetch it, get connects to the small gRPC server started by watch and retrieves the currently running SpecList, which contains everything required to show what is happening.
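The get/watch interaction can be sketched as a tiny client/server exchange. ksync's real server is gRPC with generated stubs; this sketch substitutes Go's standard-library net/rpc purely to show the shape of the call (watch holds the list, get asks for it), and the SpecStatus type and filter argument are assumptions.

```go
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// SpecStatus is an assumed, simplified stand-in for the per-spec
// status that watch tracks.
type SpecStatus struct {
	Name   string
	Status string
}

// Watch exposes the currently running spec list, mirroring the gRPC
// service that ksync's watch command starts. net/rpc is used here only
// because it is in the standard library; the real server is gRPC.
type Watch struct {
	specs []SpecStatus
}

// GetSpecList returns all tracked specs (the filter arg is unused here).
func (w *Watch) GetSpecList(filter string, reply *[]SpecStatus) error {
	*reply = w.specs
	return nil
}

func main() {
	srv := rpc.NewServer()
	srv.Register(&Watch{specs: []SpecStatus{{Name: "sunny-husky", Status: "watching"}}})

	ln, _ := net.Listen("tcp", "127.0.0.1:0") // ksync's default port is 40322
	go srv.Accept(ln)

	// The "ksync get" side: connect and fetch the current SpecList.
	client, _ := rpc.Dial("tcp", ln.Addr().String())
	var specs []SpecStatus
	client.Call("Watch.GetSpecList", "", &specs)
	fmt.Println(specs[0].Name, specs[0].Status)
}
```

The design keeps get stateless: all running state lives in watch, so any number of get invocations can inspect it without coordinating with each other.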
This is the main workhorse of ksync. It does a couple of things:
The configuration directory (~/.ksync) holds this state, including the ksync.yaml file shown above.
There is a cluster component that complements ksync running locally. It is a docker image run as a DaemonSet on every node in your cluster. The launched pods have two containers (from the same docker image): radar and syncthing. The functionality provided by this piece is:
Because remote containers run on specific nodes, the docker daemon running on those nodes needs to be inspected and instructed what to do. It provides the filesystem path that a specific container runs from (e.g. /var/lib/docker/…). Radar is a convenient way for ksync to fetch that path before configuring a folder sync.
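The path lookup boils down to reading one field out of the docker daemon's inspect response. The sketch below parses a trimmed, hardcoded overlay2-style payload with the standard library; the `GraphDriver.Data.MergedDir` field is real docker inspect output, while the function and type names are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// inspectResponse holds the one field we need from a `docker inspect`
// style payload: where on the node the container's filesystem lives.
type inspectResponse struct {
	GraphDriver struct {
		Data struct {
			MergedDir string `json:"MergedDir"`
		} `json:"Data"`
	} `json:"GraphDriver"`
}

// containerPath extracts the on-node filesystem path for a container,
// the same information radar hands back to ksync before a sync starts.
func containerPath(raw []byte) (string, error) {
	var resp inspectResponse
	if err := json.Unmarshal(raw, &resp); err != nil {
		return "", err
	}
	return resp.GraphDriver.Data.MergedDir, nil
}

func main() {
	// Trimmed sample of an overlay2 inspect payload (hardcoded here;
	// radar gets the real thing from the node's docker daemon).
	sample := []byte(`{
		"GraphDriver": {
			"Name": "overlay2",
			"Data": {"MergedDir": "/var/lib/docker/overlay2/abc123/merged"}
		}
	}`)
	path, _ := containerPath(sample)
	fmt.Println(path)
}
```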
In addition to querying the docker daemon, radar also issues restart API requests when the syncthing container or the remote container needs to be restarted.
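A restart against the Docker Engine API is a single POST to `/containers/{id}/restart`. The sketch below only constructs such a request without sending it; in practice the call goes to the node's docker socket, and the helper name and localhost host are assumptions.

```go
package main

import (
	"fmt"
	"net/http"
)

// restartContainer builds the Docker Engine API call used to restart a
// container: POST /containers/{id}/restart. The request is constructed
// but not sent; real traffic would go to the node's docker socket.
func restartContainer(id string) (*http.Request, error) {
	url := fmt.Sprintf("http://localhost/containers/%s/restart", id)
	return http.NewRequest(http.MethodPost, url, nil)
}

func main() {
	req, _ := restartContainer("abc123")
	fmt.Println(req.Method, req.URL.Path)
}
```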
Ksync itself does not implement the file syncing between hosts; it simply orchestrates it. Syncthing does the actual moving of files.
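Putting the cluster side together, the DaemonSet described above might look roughly like this. This is a sketch only: the image name, namespace, labels, and commands are assumptions, not ksync's actual manifest. It illustrates one pod per node with the radar and syncthing containers launched from the same docker image.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ksync
  namespace: kube-system   # assumed namespace
spec:
  selector:
    matchLabels:
      app: ksync
  template:
    metadata:
      labels:
        app: ksync
    spec:
      containers:
      - name: radar
        image: ksync/ksync:latest   # assumed image name
        command: ["radar"]
      - name: syncthing
        image: ksync/ksync:latest   # same image, different entrypoint
        command: ["syncthing"]
```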
From a syncing perspective, the most important objects (and their relationships) are:
SpecList -> Spec(SpecDetails) -> ServiceList -> Service(RemoteContainer) -> Folder
The canonical list of specs (and thus folders) that contain the configuration required to move files between the local and remote systems.
Orchestration of a specific spec and the folders that can be synced between the local and remote systems.
All configuration required to sync a folder.
Each spec can match multiple pods (in the event that a selector is used instead of a pod name). The service list orchestrates each individual service.
A service represents an active folder being synced.
The remote side of a spec. Once a match has occurred, the RemoteContainer has all configuration required to contact and orchestrate the remote side of things.
This is where the bulk of orchestration occurs.
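The relationships above can be sketched as Go types. The type names follow the list (SpecList, Spec, Service, RemoteContainer, Folder); the fields are illustrative assumptions, not ksync's actual definitions.

```go
package main

import "fmt"

// Folder holds all configuration required to sync one folder.
type Folder struct {
	LocalPath  string
	RemotePath string
}

// RemoteContainer is the remote side of a spec: everything needed to
// contact and orchestrate the matched container.
type RemoteContainer struct {
	Node string
	ID   string
}

// Service represents one active folder being synced to one container;
// this is where the bulk of orchestration occurs.
type Service struct {
	Container RemoteContainer
	Folder    Folder
}

// Spec orchestrates one configured spec. Because a selector can match
// multiple pods, it carries a list of services (the ServiceList).
type Spec struct {
	Name     string
	Selector string
	Services []Service
}

// SpecList is the canonical list of specs.
type SpecList struct {
	Specs []Spec
}

func main() {
	list := SpecList{Specs: []Spec{{
		Name:     "sunny-husky",
		Selector: "app=app",
		Services: []Service{{
			Container: RemoteContainer{Node: "node-1", ID: "abc123"},
			Folder:    Folder{LocalPath: "/tmp/ksync", RemotePath: "/code"},
		}},
	}}}
	fmt.Println(list.Specs[0].Services[0].Folder.RemotePath)
}
```

Walking from SpecList down to Folder mirrors the arrow diagram above: each level narrows from configuration (what should sync) to an active sync against one concrete container.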