I'd like to preface that this solution is not intended as an easy cloud solution if you have multiple users.

## Local

### create/delete

Because these commands simply work on the config file, it is not a requirement that `watch` is running to add or remove specs.

### watch

- Watches the config file (`~/.ksync/ksync.yaml`) for updates and modifies the SpecList accordingly.
- Populates a SpecList that contains everything `create` has configured.
- Starts up syncthing in the background to manage the actual file syncing.
- Starts up a gRPC server to provide status to `get`.

### get

The current status of folders is managed by `watch`. This contains everything required to show what is happening. To fetch this, `get` connects to the small gRPC server started via `watch` and gets the currently running SpecList.

### Configuration directory

The configuration directory (`~/.ksync`) has the following format:

- `bin/syncthing` - The specific syncthing binary fetched.
- `syncthing` - The "home" directory for syncthing. This contains its configuration and db among other things.

## Cluster

There is a cluster component that complements ksync running locally. It is a docker image that is run as a DaemonSet on every node in your cluster. The launched pods have two containers (via the same docker image): radar and syncthing. The functionality provided by this piece is:

- Inspection of remote containers. Remote containers run on specific nodes; the docker daemon running on these nodes needs to be inspected and instructed what to do. This provides the filesystem path that a specific container is running from. Radar operates as a convenient way for ksync to fetch that path before configuring a folder sync.
- Container restart for both syncthing and the "hot reloaded" container. In addition to querying the docker daemon, radar also issues restart API requests for when the syncthing container or remote container need to be restarted.
- The remote folder syncing component of syncthing.

## Syncing

Ksync itself does not implement the file syncing between hosts. Syncthing does the actual moving of files.

## Lifecycle

From a syncing perspective, the most important objects (and their relationships) are:

SpecList -> Spec(SpecDetails) -> ServiceList -> Service(RemoteContainer) -> Folder

### SpecList

The canonical list of specs (and thus folders) that contain the configuration required to move files between the local and remote systems.

- Watches the config file, updating what is currently live.

### Spec

Orchestration of a specific spec and the folders that can be synced between the local and remote systems.

- Watches the k8s API for events that match the SpecDetails.
- Creates new services for matching remote pods.
- Cleans up existing services if the remote pods are deleted.

### SpecDetails

All configuration required to sync a folder.

### ServiceList

Each spec can match multiple pods (in the event that a selector is used instead of a pod name). The service list orchestrates each individual service.

- Creates new Services (containing RemoteContainers).

### Service

A service represents an active folder being synced. This is where the bulk of orchestration occurs. Once a match has occurred, the RemoteContainer has all configuration required to contact and orchestrate the remote side of things.

- Creates a tunnel (and gRPC connection) to the radar instance running on the node from RemoteContainer.
- Restarts the remote syncthing container (to refresh mounts).
- Creates a tunnel for both the syncthing API and normal syncing ports.
- Monitors events off the local syncthing process to hot reload if configured.
- Configures the device ids for the local and remote syncthing processes to allow them to connect to each other.
- Configures the folders in both the local and remote syncthing processes to start the actual sync.
- Updates status based off the local syncthing events.
- Cleans up all the tunnels, connections and configuration when the remote container goes away.