After recently getting DCP working on my Amazon network, I've been taking small steps to turn my initial example into something more useful. If anyone is trying this at home, the attached files worked for me. Put the pipeline, the images, the CSV list of images, and the metadata on AWS S3. Put the config.py, job.json, and fleet.json files on the CP control node. The config.py, job.json, and fleet.json files contain AWS-specific information that will have to be adjusted for your own AWS network. I used the examples provided on the DCP instance to get started. Sorry, my zip has the same name as the example provided on the examples page.
ExampleSBSImages.zip (4.4 MB)
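For anyone reproducing the setup, here is a minimal sketch of how I stage the inputs on S3 with boto3. The bucket name, key prefixes, and file names below are placeholders, not the ones in the attached files:

```python
# Sketch: stage the pipeline, image-list CSV, and images on S3 with boto3.
# Bucket name and key prefixes below are placeholders; adjust to your own layout.
import os
import boto3

s3 = boto3.client("s3")
bucket = "my-dcp-bucket"  # placeholder bucket name

# Pipeline and the CSV list of images
s3.upload_file("analysis.cppipe", bucket, "projects/example/analysis.cppipe")
s3.upload_file("images.csv", bucket, "projects/example/images.csv")

# All images in a single "images" folder, as in the attached example
for fname in os.listdir("images"):
    s3.upload_file(os.path.join("images", fname), bucket,
                   f"projects/example/images/{fname}")
```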
Just as a side note, it is not necessary to enable S3 permissions for "Everyone" or "Any authenticated AWS user". In addition to the configuration suggested for the ecsInstanceRole and the aws-ec2-spot-fleet-role, I added the AmazonS3FullAccess policy. I haven't yet tried removing it to see whether an analysis will still run.
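If it helps anyone reproduce that, attaching the managed policy to both roles can be done with boto3 roughly like this (role names as above; whether full S3 access is actually required is exactly what I haven't tested yet):

```python
# Sketch: attach AmazonS3FullAccess to the two roles mentioned above.
# A scoped-down bucket policy would likely be preferable; this just mirrors what I did.
import boto3

iam = boto3.client("iam")
policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"

for role in ("ecsInstanceRole", "aws-ec2-spot-fleet-role"):
    iam.attach_role_policy(RoleName=role, PolicyArn=policy_arn)
```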
In the above analysis all the images are in one folder (images), and the output for each analyzed image goes into its own subfolder in the output folder (you may need to create an empty output folder on AWS before starting). This arrangement works for me: each image is analyzed separately and written to its own CSV file.
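As a quick sanity check, this is roughly how I look over the per-image output folders afterwards (bucket name and output prefix are placeholders):

```python
# Sketch: list the per-image CSV outputs produced on S3.
# Bucket name and output prefix are placeholders.
import boto3

s3 = boto3.client("s3")
bucket = "my-dcp-bucket"
prefix = "projects/example/output/"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith(".csv"):
            print(obj["Key"])  # e.g. .../output/<image_name>/<per-image>.csv
```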
Ok, here are my questions:
When I run the above analysis, a "Spot Fleet" of two instances is started in addition to the one instance associated with the service/task in the cluster (newcluster in this case). That instance is visible in the EC2 Container Service. The spot fleet instances don't appear to be doing anything and are terminated when the monitor winds the analysis down. How should I be configuring/organizing/initiating the spot fleet so it actually does some of the work?
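In case it's useful for diagnosing this, here is a rough boto3 check I've been using to see the fleet's instances and whether they ever register with the cluster (the spot fleet request ID is a placeholder; "newcluster" is the cluster from my setup):

```python
# Sketch: check whether the spot fleet instances ever join the ECS cluster.
# The spot fleet request ID is a placeholder; "newcluster" is my cluster name.
import boto3

ec2 = boto3.client("ec2")
ecs = boto3.client("ecs")

fleet_request_id = "sfr-00000000-0000-0000-0000-000000000000"  # placeholder

# Instances launched by the spot fleet
fleet = ec2.describe_spot_fleet_instances(SpotFleetRequestId=fleet_request_id)
fleet_instance_ids = {i["InstanceId"] for i in fleet["ActiveInstances"]}
print("Spot fleet instances:", fleet_instance_ids)

# Container instances actually registered with the cluster
arns = ecs.list_container_instances(cluster="newcluster")["containerInstanceArns"]
registered = set()
if arns:
    described = ecs.describe_container_instances(cluster="newcluster",
                                                 containerInstances=arns)
    registered = {ci["ec2InstanceId"] for ci in described["containerInstances"]}
print("Registered with newcluster:", registered)

# If these two sets don't overlap, the spot fleet instances never joined the
# cluster, which would fit with them sitting idle while the one service
# instance does all the work.
print("Fleet instances in cluster:", fleet_instance_ids & registered)
```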
What would be convenient is if I could have multiple input folders of images. For example, an experiment with multiple plates would have a folder of images for each plate. I don't want to "group" the analysis so that all the data from a plate goes into one CSV, or for illumination correction. I can live with a separate output folder for each analyzed image, but it would be nice not to have to put all the images from one screening experiment into a single big input folder. I've tried a number of different things to get this to work and haven't been able to. It seemed like this should have worked, but it did not:
test_06.zip (4.4 MB)
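What I was attempting, roughly, was to keep each plate's images under its own S3 prefix and point the rows of the image-list CSV at those per-plate prefixes. A sketch of the kind of CSV-generation script I tried is below; the column names, bucket URL, and path layout are just my own assumptions, not anything prescribed by DCP:

```python
# Sketch: build the image-list CSV from several per-plate folders instead of
# one big "images" folder. Column names and S3 layout are my own assumptions.
import csv
import os

bucket_url = "https://s3.amazonaws.com/my-dcp-bucket/projects/example"  # placeholder
plates = ["Plate1", "Plate2", "Plate3"]  # one local folder of images per plate

with open("images.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["Image_FileName_DNA", "Image_PathName_DNA", "Metadata_Plate"])
    for plate in plates:
        for fname in sorted(os.listdir(plate)):
            writer.writerow([fname, f"{bucket_url}/{plate}/images", plate])
```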
My last question for the moment: if the CP group can change and post new Docker images, which get pulled in when fab or config is run, how can I avoid having a working system stop working when an update is made that isn't for the better?
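One idea I'm considering, if it would work, is pinning the image in config.py to a fixed tag (or pushing a copy to my own ECR repository) rather than tracking whatever the latest published image is. Something along these lines, assuming config.py names the worker image in a variable like DOCKERHUB_TAG (the variable name, image name, and tag here are examples and may differ in your version):

```python
# Sketch: in config.py, pin the worker image to a fixed tag or digest instead of
# "latest", so a newly published image can't silently change a working setup.
# The variable name DOCKERHUB_TAG and the image name/tag are assumptions.

# DOCKERHUB_TAG = 'cellprofiler/distributed-cellprofiler:latest'   # tracks updates
DOCKERHUB_TAG = 'cellprofiler/distributed-cellprofiler:1.0.0'      # pinned tag (example)
# or, even more strictly, pin to an immutable digest:
# DOCKERHUB_TAG = 'cellprofiler/distributed-cellprofiler@sha256:<digest>'
```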
Thanks for your time! John