Groups in CellProfiler do mean that every group is processed separately, but CellProfiler doesn't stop and create the output CSVs at the end of each group: it either writes the results for all groups at the end (if you're using ExportToSpreadsheet) or writes them for each image as it goes along (if you're using ExportToDatabase). There's no in-between behavior from the GUI.
On the command line, though, you can run one group at a time. Here's a simplified version of a CellProfiler command I ran today:
cellprofiler -c -r -b -p path_to_pipeline/segment_tracking_masking_centAWS.cppipe -i inputpath/input/ -o outputpath/Movie1-201 -d /outputpath/Movie1-201/cp.is.done --data-file=path_to_csv/load_data_csv_Movie1.csv -g Metadata_MovieName=Movie1,Metadata_Timepoint=201
This command ran my pipeline on only Movie1, Timepoint 201, even though the CSV had the images for all of Movie1 in it. It uses LoadData rather than the first 4 input modules, but you can make that CSV (as I did for this example) by first using the 4 input modules as a separate pipeline, telling CellProfiler to export a nice CSV with all my channels, metadata, groupings, etc. configured, and then just using that CSV in LoadData in the main analysis pipeline that I call from the command line.
So, with all that said, you could definitely write yourself a script that iterates over your metadata, calls CellProfiler on the command line the appropriate number of times, and waits for each run to finish. It will likely be slower than running from the GUI, since each command-line run uses only one CPU at a time (whereas the GUI knows to distribute the work across as many CPUs as you allow it), but it's reasonably straightforward.
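For what that kind of driver script might look like, here's a minimal Python sketch. It just wraps the command shown above in a loop over grouping values; the paths, the movie name, and the timepoint range are hypothetical placeholders you'd replace with your own metadata:

```python
import subprocess

# Hypothetical paths -- substitute your own pipeline and CSV locations.
PIPELINE = "path_to_pipeline/segment_tracking_masking_centAWS.cppipe"
DATA_CSV = "path_to_csv/load_data_csv_Movie1.csv"


def build_command(movie, timepoint):
    """Assemble one headless CellProfiler invocation for a single group."""
    return [
        "cellprofiler", "-c", "-r",
        "-p", PIPELINE,
        "-o", f"outputpath/{movie}-{timepoint}",
        f"--data-file={DATA_CSV}",
        "-g", f"Metadata_MovieName={movie},Metadata_Timepoint={timepoint}",
    ]


if __name__ == "__main__":
    # One CellProfiler process per group, run sequentially;
    # subprocess.run() waits for each group to finish before
    # starting the next, so the runs never overlap.
    for timepoint in range(1, 202):  # hypothetical timepoint range
        subprocess.run(build_command("Movie1", timepoint), check=True)
```

Since each run is an independent process, you could also swap the sequential loop for something like `multiprocessing.Pool` to launch a few groups in parallel, which would recover some of the multi-CPU speed you lose by leaving the GUI.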
I'm likely biased because I helped a lot on the cloud-based solution, but if you're at the level where you can write a script to call CP from the command line, you're likely at a level where you CAN use that too if you want. I definitely don't come from a CS background AT ALL, nor does anyone on the biology sub-team I work on at CP, so we've very intentionally written the tool and its wiki to be aimed at 'a biologist who has too many images to process but doesn't necessarily know much about computers'. By all means try doing it locally first, but if that becomes onerous, that's my suggestion for a backup plan.