Very good, Mr Brothir, what a delicious specimen you are. Now, for you to put in the sequence of operations and monitor its outcomes requires the input of your own moral agency. You yourself would put in your moral framework so that it would do the calculation to benefit all of mankind. What it seems to me Jordan is saying is that you are still applying your own a priori morality for the scientific tool of mathematics to solve as many aggregations as possible for the benefit of humanity. Before you evaluate, you have to see that your experiment treats humanity as an end and not a means. You as a human being cannot physically go through all those outcomes at once, so you would need an AI of sorts to do the job. However, if that AI approaches the likes of a singularity, then what outcome could that have? Would your own morality be sufficient to prevent an unseen catastrophe from the AI itself, wherein its own creation tethers to man's existence, etc.? It is fun to think about.