" Input: { (data_uoa) - pipeline module UOA (pipeline) - prepared pipeline setup (already ready to run) or (pipeline_from_file) - load prepared pipeline setup from file (pipeline_update) - update pipeline with this dict (useful to update already prepared pipeline from file) (pipeline_flags) - update pipeline directly from the CMD flags (will be parsed by CK) (iterations) - limit number of iterations, otherwise infinite (default=10) if -1, infinite (or until all choices are explored) (start_from_iteration) - skip all iterations before this number (repetitions) - statistical repetitions (default=4) (seed) - if !='', use as random seed (to reproduce experiments) Enforce exploration: (start) (stop) (step) (explore_type) = random, parallel-random, loop, parallel-loop Note: machine learning-based or customized autotuners are now moved to external plugins (see "custom_autotuner" vars below) or (random) (parallel-random) (loop) (parallel-loop) (process_multi_keys) - list of keys (starts with) to perform stat analysis on flat array, by default ['##characteristics#*', '##features#*' '##choices#*'], if empty, no stat analysis (record) - if 'yes', record results (record_uoa) - (data UOA or CID where module_uoa ignored!) explicitly record to this entry (record_repo) - (repo UOA) explicitly select this repo to record (record_experiment_repo) - (repo UOA) explicitly select this repo to record (record_failed) - if 'yes', record even failed experiments (for debugging, buildbots, detecting designed architecture failures, etc) (record_only_failed) - if 'yes', record only failed experiments (useful to crowdsource experiments when searching only for compiler/program/architecture bugs (for example fuzzing via random compiler flags))... (record_permanent) - if 'yes', mark recorded points as permanent (will not be deleted by Pareto filter) (record_ignore_update) - (default=yes), if 'yes', skip recording date/author info for each update (tags) - record these tags to the entry description (subtags) - record these subtags to the point description (skip_record_pipeline) - if 'yes', do not record pipeline (to avoid saving too much stuff during crowd-tuning) (skip_record_desc) - if 'yes', do not record desc (to avoid saving too much stuff during crowd-tuning) (record_params) - extra record parameters (to 'add experiment' function) (features_keys_to_process) - list of keys for features (and choices) to process/search when recording experimental results (can be wildcards) by default ['##features#*', '##choices#*', '##choices_order#*'] (frontier_keys) - list of keys to leave only best points during multi-objective autotuning (multi-objective optimization) (frontier_keys_reverse) - list of values associated with above keys. If True, reverse sorting for a give key (by default descending) (frontier_margins) - list of margins when comparing values, i.e. Vold/Vnew < this number (such as 1.10 instead of 1). 
will be used if !=None (frontier_features_keys_to_ignore) - list of keys to ignore from 'features_keys_to_process' when detecting subset of points to detect frontier (usually removing optimization dimensions, such as compiler flags) (only_filter) - if 'yes', do not run pipeline, but run filters on data (for Pareto, for example) (skip_stat_analysis) - if 'yes', just flatten array and add #min (stat_flat_dict) - pre-load flat dict from previous experiments to aggregate for stat analysis (features) - extra features (meta) - extra meta (record_dict) - extra dict when recording experiments (useful to set subview_uoa, for example) (state) - pre-load state preserved across iterations (save_to_file) - if !='', save output dictionary to this file (skip_done) - if 'yes', do not print 'done' at the end of autotuning (sleep) - set sleep before iterations ... (force_pipeline_update) - if 'yes', re-check pipeline preparation - useful for replay not to ask for choices between statistical repetitions (ask_enter_after_each_iteration) - if 'yes', ask to press Enter after each iteration (tmp_dir) - (default 'tmp') - if !='', use this tmp directory to clean, compile and run (flat_dict_for_improvements) - add dict from previous experiment to compare improvements (pause_if_fail) - if pipeline fails, ask to press Enter (useful to analyze which flags fail during compiler flag autotuning) (pause) - if 'yes', pause after each iteration (aggregate_failed_cases) - if pipeline fails, aggregate failed cases (to produce report during crowdtuning or automatic compiler bug detection) (solutions) - check solutions (ref_solution) - if 'yes', choose reference solution from above list (internal_ref) - if 'yes', try internal reference before checking solutions ... (prune) - prune solution (find minimal choices that give the same result) (reduce) - the same as above (reduce_bug) - reduce choices to localize bug (pipeline fail) (prune_ignore_choices) - list of choices to ignore (such as base flag, for example) (prune_md5) - if 'yes', check if MD5 doesn't change (prune_invert) - if 'yes', prune all (switch off even unused - useful for collaborative machine learning) (prune_invert_add_iters) - if 'yes', add extra needed iterations (prune_invert_do_not_remove_key) - if 'yes', keep both on and off keys (to know exact solution) (prune_result_conditions) - list of extra conditions to accept result (variation, performance/energy/code size constraints, etc) (print_keys_after_each_iteration) - print values of keys from flat dict after each iteration (to monitor characteristics) (result_conditions) - check results for condition (condition_objective) - how to check results (#min,#max,#exp ...) (collect_all) - collect all experiments and record to (custom_autotuner) - dictionary to customize autotuner (exploration, DSE, machine learning based tuning, etc) (custom_autotuner_vars) - extra vars to customize autotuner (for example, set default vs. 
random) (preserve_deps_after_first_run) - if 'yes', save deps after first run (useful for replay) } Output: { return - return code = 0, if successful > 0, if error (error) - error text if return > 0 last_iteration_output - output of last iteration last_stat_analysis - flat dict with stat analysis experiment_desc - dict with experiment description recorded_info - {'points':{}, 'deleted_points':{}, 'recorded_uid'} (failed_cases) - failed cases if aggregate_failed_cases=yes (solutions) - updated solutions with reactions to optimizations (needed for classification of a given computing species) (all) - if i['collect_all']=='yes', list of results of all iterations [{experiment_desc ..},...] } "
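
Usage sketch (not part of the original docstring): the snippet below shows how this action might be invoked through the CK kernel API, assuming a program pipeline is first prepared via the 'pipeline' action of the 'program' module. The benchmark entry name, the record entry name and the chosen exploration keys are illustrative assumptions, not values taken from this documentation.

    import copy
    import ck.kernel as ck

    # 1) Prepare a program pipeline (the benchmark UOA is a hypothetical example)
    r = ck.access({'action':     'pipeline',
                   'module_uoa': 'program',
                   'data_uoa':   'cbench-automotive-susan',
                   'prepare':    'yes',
                   'out':        'con'})
    if r['return'] > 0:
        ck.err(r)

    pipeline = copy.deepcopy(r)   # prepared pipeline setup (already ready to run)

    # 2) Autotune: random exploration, 10 iterations, 4 statistical repetitions,
    #    recording results to an experiment entry
    r = ck.access({'action':      'autotune',
                   'module_uoa':  'pipeline',
                   'data_uoa':    'program',
                   'pipeline':    pipeline,
                   'iterations':  10,
                   'repetitions': 4,
                   'explore_type':'random',
                   'record':      'yes',
                   'record_uoa':  'my-autotuning-results',   # hypothetical entry name
                   'out':         'con'})
    if r['return'] > 0:
        ck.err(r)

    # Inspect the stat analysis of the last iteration (see Output keys above)
    print(r.get('last_stat_analysis', {}))

As with all CK calls, success is signalled by r['return'] == 0; any other value carries an error message in r['error'].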