Since we are currently working intensively on ‘Senaite.Sync’ — which so far allows users to copy their old ‘bika.lims’/‘bika.health’ or any ‘senaite.core’/‘senaite.health’ instance to a new instance, and to keep it updated later via the ‘Complement Step’ — I would like to make sure that the road we are following is correct.
@xispa and I were discussing ‘Senaite.Sync’, and at some point we thought that the current version of the add-on is actually a migration tool in itself. Instead of calling it ‘Senaite.Sync’ and adding more code, this tool could be used solely to migrate instances. The real synchronization add-on could then depend on ‘Senaite.Migration’ and contain additional code, views and workarounds. That way, users who only want to migrate their data wouldn’t have to deal with Sync and get confused.
As a user or a developer of the community, please feel free to share your ideas for a better world : )
I think Migration and Sync should be different add-ons with different functionality.
Migration should be used to move data from Bika LIMS/Bika Health to Senaite LIMS/Core, or from a test Senaite LIMS/Health instance to a production instance.
Sync should be used to bring Senaite LIMS/Health data from different instances into a central Senaite LIMS/Health instance. E.g. you have a client with various labs that are using individual instances of the LIMS (due to connectivity, scalability, speed … issues) and they want a central instance where data is collated for reporting purposes.
I truly believe sync and migration must be two different add-ons. Sync would probably depend on migration, which is fine. So here comes my suggestion: rename senaite.sync to senaite.migration and create a new senaite.sync add-on.
Thank you @ronaldm and @xispa for sharing your thoughts.
It seems having two different add-ons is reasonable, and I am going to work on this very soon.
Therefore, I would rather say that the “migrate” step depends on the data of the “fetch” step, rather than the other way around ;)
I agree with splitting this and doing the raw data handling in a pluggable and chained manner.
These plugins would behave like middleware: they take the raw data and the content object as input and return the modified data and the modified instance as output (easy ;))
With this technique, each plugin that “migrates” the data takes the output of the previous plugin as its input, and its own output serves as the input for the next plugin.
Hence, I would suggest defining the “universal” data format as the data coming from the data fetcher. The fetcher should be some kind of utility, so that each plugin is able to fetch additional data from the source as needed.
I think with that approach we would be most flexible: plugins can be built when needed and plugged in like upgrade steps (rough sketch below).
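Just to illustrate the idea, here is a minimal sketch of the middleware-style chaining. All names (`FetchUtility`, `MigrationPlugin`, `run_chain`, the example type mapping) are hypothetical and only show the pattern; none of this is actual senaite.sync code.

```python
# Hypothetical sketch of a pluggable, chained migration pipeline.
# Names and data are made up for illustration only.


class FetchUtility(object):
    """Fetches raw data from the source instance and defines the
    'universal' data format that all plugins agree on."""

    def __init__(self, source_url):
        self.source_url = source_url

    def fetch(self, uid):
        # A real implementation would query the remote JSON API here.
        return {"uid": uid, "portal_type": "Client", "title": "Happy Hills"}


class MigrationPlugin(object):
    """Base class for middleware-style plugins: take (data, obj) in,
    return (data, obj) out, possibly modified."""

    def __init__(self, fetcher):
        # Each plugin keeps a reference to the fetch utility so it can
        # fetch additional data from the source when needed.
        self.fetcher = fetcher

    def __call__(self, data, obj):
        return data, obj


class RenameTypePlugin(MigrationPlugin):
    """Example plugin: map an old portal_type to a new one (made-up mapping)."""

    def __call__(self, data, obj):
        if data.get("portal_type") == "Client":
            data["portal_type"] = "SenaiteClient"
        return data, obj


def run_chain(plugins, data, obj=None):
    """Pipe the data (and the content object) through all plugins in order;
    each plugin's output is the next plugin's input."""
    for plugin in plugins:
        data, obj = plugin(data, obj)
    return data, obj


if __name__ == "__main__":
    fetcher = FetchUtility("https://source.example.com")
    raw = fetcher.fetch("abc123")
    migrated, obj = run_chain([RenameTypePlugin(fetcher)], raw)
    print(migrated)  # {'uid': 'abc123', 'portal_type': 'SenaiteClient', ...}
```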