
Energy Efficiency in Programming Languages, Revisiting Rosetta Code

Checking Energy Consumption in Programming Languages Using Rosetta Code as a case study.

What is this?

This repo contains the source code of 21 distinct tasks (of which 9 were measured for a scientific journal submission), implemented in 21 different languages (exactly as taken from Rosetta Code).

It also contains tools which support, for each benchmark of each language, 4 operations: (1) compilation, (2) execution, (3) energy measurement and (4) memory detection.

How is it structured and how does it work?

This framework follows a specific folder structure, which guarantees the correct workflow when the goal is to perform an operation for all benchmarks at once. Moreover, each benchmark must define how to perform the 4 operations considered.

Next, we explain the folder structure and how to specify, for each language benchmark, the execution of each operation.

The Structure

The main folder contains 23 elements:

  1. 21 sub-folders (one for each of the considered tasks); each folder contains a sub-folder for each considered programming language.
  2. A Python script compile_all.py, capable of building, running and measuring the energy and memory usage of every benchmark in all considered languages.
  3. A RAPL sub-folder, containing the code of the energy measurement framework.

The directory tree will look something like this:

| ...
| <benchmark-1>
	| <Language-1>
		| <source>
		| Makefile
		| [input]
	| ...
	| <Language-i>
		| <source>
		| Makefile
		| [input]
| ...
| <benchmark-i>
	| <Language-1>
	| ...
	| <Language-i>
| RAPL
| compile_all.py

The Operations

Each language sub-folder, included in a task folder, contains a Makefile. This file states how to perform the 4 supported operations: (1) compilation, (2) execution, (3) energy measurement and (4) memory detection.

Each Makefile must contain 4 rules, one for each operation:

compile: Specifies how the benchmark should be compiled in the considered language. Interpreted languages don't need it, so this rule can be left blank in such cases.
run: Specifies how the benchmark should be executed. It is used to test whether the benchmark runs without errors and produces the expected output.
measure: Uses the framework included in the RAPL folder to measure the energy consumed by executing the task specified in the run rule.
mem: Similar to measure, but runs the task specified in the run rule with peak memory detection.

To better understand it, here's the Makefile for the Fibonacci-sequence task in the Rust language:

compile:
	/usr/local/src/rust-1.16.0/bin/rustc -C opt-level=3 -C target-cpu=core2 -C lto -L /usr/local/src/rust-libs fibonacci-sequence-5.rust -o fibonacci-sequence.rust_run

measure:
	sudo ../../RAPL/main "./fibonacci-sequence.rust_run" Rust fibonacci-sequence

run:
	./$(TASK).rust_run

mem:
	/usr/bin/time -v ./$(TASK).rust_run

Running an example

We include a main Python script, compile_all.py, which you can call either from the main folder or from inside a language folder. It can be executed as follows:

python compile_all.py [rule]

You can provide one of the 4 rules referenced above, and the script will perform it using every Makefile found at the same folder level and below.

The default rule is compile, which means that if you run it with no arguments (python compile_all.py), the script will try to compile all benchmarks.
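
For reference, the following is a minimal sketch of this workflow, assuming GNU make is available on the PATH; it illustrates the idea, it is not the actual compile_all.py:

# Minimal sketch (not the actual compile_all.py): walk the tree from the
# current directory and run the requested rule on every Makefile found.
import os
import subprocess
import sys

rule = sys.argv[1] if len(sys.argv) > 1 else "compile"  # default rule

for dirpath, dirnames, filenames in os.walk("."):
    if "Makefile" in filenames:
        print(f"Running 'make {rule}' in {dirpath}")
        subprocess.run(["make", rule], cwd=dirpath)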

The results of the energy measurements will be stored in files named <language>.csv, where <language> is the name of the language being measured. You will find such a file inside the corresponding language folder.

Each .csv will contain a line with the following format:

benchmark-name ; PKG (J) ; CPU (J) ; GPU (J) ; DRAM (J) ; Time (ms)

Do note that the availability of GPU/DRAM measurements depends on your machine's architecture. This is a limitation of RAPL itself.
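
As an illustration, here is a small Python snippet for reading such a file, assuming the semicolon-separated layout described above (the file name Rust.csv is just an example):

# Hedged sketch: print the measurements from a <language>.csv produced by
# the measure rule, assuming the semicolon-separated layout shown above.
import csv

with open("Rust.csv") as f:                      # example file name
    for row in csv.reader(f, delimiter=";"):
        if len(row) < 6:
            continue                             # skip malformed or empty lines
        name, pkg, cpu, gpu, dram, ms = (field.strip() for field in row[:6])
        print(f"{name}: PKG={pkg} J, CPU={cpu} J, GPU={gpu} J, DRAM={dram} J, time={ms} ms")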

Add your own example!

Want to know your own code's energy behavior? We can help!

Follow these steps:

1. Create a folder with the name of your benchmark, such as test-benchmark, and inside it a sub-folder for the language you implemented it in, following the structure shown above (see also the sketch below).
2. Follow the instructions presented in the Operations section, and fill in the Makefile.
3. Use the compile_all.py script to compile, run, and/or measure what you want! Or run it yourself using the make command.
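
For steps 1 and 2, something like the following could scaffold the new folder and a skeleton Makefile. This is a hedged sketch: the benchmark and language names are hypothetical, and the recipe bodies are placeholders you must fill in following the Operations section.

# Hedged sketch: scaffold a <benchmark>/<language>/Makefile skeleton,
# matching the folder structure described above. Names are hypothetical.
import os

benchmark = "test-benchmark"   # hypothetical benchmark name
language = "Python"            # hypothetical language folder

skeleton = (
    "compile:\n"
    "\t# how to compile the benchmark (leave empty for interpreted languages)\n"
    "\n"
    "run:\n"
    "\t# how to run the benchmark\n"
    "\n"
    "measure:\n"
    "\t# wrap the run command with ../../RAPL/main, as in the Rust example above\n"
    "\n"
    "mem:\n"
    "\t# wrap the run command with /usr/bin/time -v\n"
)

path = os.path.join(benchmark, language)
os.makedirs(path, exist_ok=True)
with open(os.path.join(path, "Makefile"), "w") as f:
    f.write(skeleton)
print(f"Created {path}/Makefile -- add your source file and fill in the rules.")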

Further Reading

Want to know more? Check this website!

There you can find the results of an experimental setup using the contents of this repo, along with the specifications of the machine and the compilers used.

You can also find there the paper which includes these results and our discussion of them:

"Energy Efficiency across Programming Languages: How does Energy, Time and Memory Relate?", Rui Pereira, Marco Couto, Francisco Ribeiro, Rui Rua, Jácome Cunha, João Paulo Fernandes, and João Saraiva. In Proceedings of the 10th International Conference on Software Language Engineering (SLE '17)

IMPORTANT NOTE:

In some cases, the Makefiles specify the path to the language's compiler/runner. You will most likely not have them at the same path on your machine. If you would like to properly test every benchmark of every language, please make sure you have all compilers/runners installed, and adapt the Makefiles accordingly.

Contacts and References

Green Software Lab

Main contributors: @Marco Couto and @Rui Pereira

The Computer Language Benchmark Game
