<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.memcp.org/index.php?action=history&amp;feed=atom&amp;title=Parallel_Computing</id>
	<title>Parallel Computing - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://www.memcp.org/index.php?action=history&amp;feed=atom&amp;title=Parallel_Computing"/>
	<link rel="alternate" type="text/html" href="https://www.memcp.org/index.php?title=Parallel_Computing&amp;action=history"/>
	<updated>2026-04-12T22:33:46Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.39.1</generator>
	<entry>
		<id>https://www.memcp.org/index.php?title=Parallel_Computing&amp;diff=35&amp;oldid=prev</id>
		<title>Carli: Created page with &quot;Almost 99% of all newly invented programming languages are imperative. But imperative languages have one drawback: they are hard to parallelize.  == Drawbacks of Imperative Programming Languages == Imperative programming languages have one major drawback: state. The concept of an imperative language is that commands are executed which change the contents of variables or complex objects in memory. When trying to create an optimizing compiler that by itself finds...&quot;</title>
		<link rel="alternate" type="text/html" href="https://www.memcp.org/index.php?title=Parallel_Computing&amp;diff=35&amp;oldid=prev"/>
		<updated>2024-05-17T11:51:57Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;Almost 99% of all newly invented are imperative programming languages. But imperative languages have one drawback: their parallelization is hard.  == Drawbacks of Imperative Programming Languages == Imperative programming languages do have one mayor drawback: state. The concept of an imperative language is that commands are executed which change the content of variables or complex objects in the memory. When trying to create an optimizing compiler that from itself finds...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;Almost 99% of all newly invented programming languages are imperative. But imperative languages have one drawback: they are hard to parallelize.&lt;br /&gt;
&lt;br /&gt;
== Drawbacks of Imperative Programming Languages ==&lt;br /&gt;
Imperative programming languages have one major drawback: state. The concept of an imperative language is that commands are executed which change the contents of variables or complex objects in memory. When trying to create an optimizing compiler that finds parallelizable parts of the code on its own, the compiler has to keep track of the data dependencies and arbitrary side effects of every command and function call.&lt;br /&gt;
&lt;br /&gt;
The simplest solution to this problem is probably to tell the compiler explicitly which loops are parallelizable. This, however, forces the developer to write nearly side-effect-free code anyway. So we decided to go the pure way – to &amp;#039;&amp;#039;&amp;#039;design a programming language that does not allow side effects.&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
== The Functional World ==&lt;br /&gt;
A &amp;quot;pure&amp;quot; functional programming language is one in which every function computes its result solely from its inputs. This is a great basis for the highly parallel map-reduce algorithms we need in our clusterable in-memory database.&lt;br /&gt;
&lt;br /&gt;
We took the Scheme interpreter by Pieter Kelchtermans, written in Go, and added some extra features:&lt;br /&gt;
&lt;br /&gt;
* We removed the &amp;lt;code&amp;gt;set!&amp;lt;/code&amp;gt; instruction because it is the only function that causes global side effects. All other functions are local to the current environment, and as long as you don’t change the environment, every piece of code can run in parallel without affecting any other&lt;br /&gt;
* We made &amp;lt;code&amp;gt;begin&amp;lt;/code&amp;gt; open its own environment, so self-recursion can be achieved by defining a function inside a begin block (&amp;lt;code&amp;gt;!begin&amp;lt;/code&amp;gt; is the scopeless version)&lt;br /&gt;
* We fixed &amp;lt;code&amp;gt;if&amp;lt;/code&amp;gt;&lt;br /&gt;
* We added strings as a native datatype, along with a &amp;lt;code&amp;gt;concat&amp;lt;/code&amp;gt; function that concatenates all of its arguments into one string&lt;br /&gt;
* We added a serialization mechanism that can fully recover values and turn them back into valid Scheme code.&lt;br /&gt;
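&lt;br /&gt;
The serialization idea can be sketched in Go roughly like this; the value model (numbers, strings, lists) and the function name are assumptions for illustration, not MemCP’s actual implementation:&lt;br /&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// Serialize turns an in-memory value back into valid Scheme source,
// so it can be shipped to another machine and re-read there.
// Numbers, strings and lists are an assumed, simplified value model.
func Serialize(v interface{}) string {
	switch x := v.(type) {
	case string:
		// quote the string and escape embedded quotes
		return "\"" + strings.ReplaceAll(x, "\"", "\\\"") + "\""
	case []interface{}:
		parts := make([]string, len(x))
		for i, item := range x {
			parts[i] = Serialize(item) // recurse into list elements
		}
		return "(list " + strings.Join(parts, " ") + ")"
	default:
		return fmt.Sprint(x) // numbers and other atoms
	}
}

func main() {
	v := []interface{}{1, "Peter", []interface{}{2, 3}}
	fmt.Println(Serialize(v)) // (list 1 "Peter" (list 2 3))
}
```

The emitted text is itself Scheme code, so evaluating it on another node reconstructs the value.&lt;br /&gt;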
&lt;br /&gt;
 &amp;#039;&amp;#039;&amp;#039;carli@launix-MS-7C51&amp;#039;&amp;#039;&amp;#039;:&amp;#039;&amp;#039;&amp;#039;~/projekte/memcp/server-node-golang&amp;#039;&amp;#039;&amp;#039;$ make&lt;br /&gt;
 go run *.go&lt;br /&gt;
 &amp;gt; 45&lt;br /&gt;
 ==&amp;gt; 45&lt;br /&gt;
 &amp;gt; (+ 1 2)&lt;br /&gt;
 ==&amp;gt; 3&lt;br /&gt;
 &amp;gt; (define currified_add (lambda (a) (lambda (b) (+ a b))))&lt;br /&gt;
 ==&amp;gt; &amp;quot;ok&amp;quot;&lt;br /&gt;
 &amp;gt; ((currified_add 4) 5)&lt;br /&gt;
 ==&amp;gt; 9&lt;br /&gt;
 &amp;gt; (define add_1 (currified_add 1))&lt;br /&gt;
 ==&amp;gt; &amp;quot;ok&amp;quot;&lt;br /&gt;
 &amp;gt; (add_1 6)&lt;br /&gt;
 ==&amp;gt; 7&lt;br /&gt;
 &amp;gt; (add_1 (add_1 3))&lt;br /&gt;
 ==&amp;gt; 5&lt;br /&gt;
 &amp;gt; (define name &amp;quot;Peter&amp;quot;) &lt;br /&gt;
 ==&amp;gt; &amp;quot;ok&amp;quot;&lt;br /&gt;
 &amp;gt; (concat &amp;quot;Hello &amp;quot; name)&lt;br /&gt;
 ==&amp;gt; &amp;quot;Hello Peter&amp;quot;&lt;br /&gt;
 &amp;gt; &lt;br /&gt;
&lt;br /&gt;
== MemCP functions that support parallelism ==&lt;br /&gt;
The following functions support parallelism:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;scan&amp;lt;/code&amp;gt; runs &amp;lt;code&amp;gt;filter&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;map&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;reduce&amp;lt;/code&amp;gt; in parallel for each shard; &amp;lt;code&amp;gt;reduce2&amp;lt;/code&amp;gt; runs serially&lt;br /&gt;
* &amp;lt;code&amp;gt;scan_order&amp;lt;/code&amp;gt; runs &amp;lt;code&amp;gt;filter&amp;lt;/code&amp;gt; as well as the sorting in parallel, and &amp;lt;code&amp;gt;map&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;reduce&amp;lt;/code&amp;gt; serially&lt;br /&gt;
* &amp;lt;code&amp;gt;parallel&amp;lt;/code&amp;gt; evaluates each of its parameters in parallel and continues once all jobs are done&lt;br /&gt;
* &amp;lt;code&amp;gt;newsession&amp;lt;/code&amp;gt; creates a thread-safe key-value store to share context across threads&lt;br /&gt;
* &amp;lt;code&amp;gt;once&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;mutex&amp;lt;/code&amp;gt; help to synchronize control flow&lt;br /&gt;
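&lt;br /&gt;
As a rough sketch of how such a &amp;lt;code&amp;gt;scan&amp;lt;/code&amp;gt; can be structured internally (illustrative Go with assumed signatures, not MemCP’s actual implementation): filter, map and reduce run per shard in parallel, and reduce2 merges the per-shard results serially.&lt;br /&gt;

```go
package main

import (
	"fmt"
	"sync"
)

// scan runs filter, map and reduce in parallel, one goroutine per
// shard, then merges the per-shard partials serially (the reduce2
// step). Shard layout and signatures are illustrative assumptions.
func scan(shards [][]int,
	filter func(int) bool,
	mapper func(int) int,
	reduce func(int, int) int,
	neutral int,
	reduce2 func(int, int) int) int {

	partials := make([]int, len(shards))
	var wg sync.WaitGroup
	for s, shard := range shards {
		wg.Add(1)
		go func(s int, shard []int) { // one goroutine per shard
			defer wg.Done()
			acc := neutral
			for _, row := range shard {
				if filter(row) {
					acc = reduce(acc, mapper(row))
				}
			}
			partials[s] = acc // each goroutine owns its own slot
		}(s, shard)
	}
	wg.Wait()

	// reduce2: serial merge of the per-shard results
	result := neutral
	for _, p := range partials {
		result = reduce2(result, p)
	}
	return result
}

func main() {
	shards := [][]int{{1, 2, 3}, {4, 5, 6}}
	even := func(x int) bool { return x%2 == 0 }
	double := func(x int) int { return 2 * x }
	add := func(a, b int) int { return a + b }
	fmt.Println(scan(shards, even, double, add, 0, add)) // 24
}
```

In MemCP itself you would write the filter, map and reduce steps as Scheme lambdas and pass them to &amp;lt;code&amp;gt;scan&amp;lt;/code&amp;gt;.&lt;br /&gt;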
&lt;br /&gt;
You can read the manual by typing &amp;lt;code&amp;gt;(help &amp;quot;scan&amp;quot;)&amp;lt;/code&amp;gt; in the Scheme console.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
What did we achieve?&lt;br /&gt;
&lt;br /&gt;
* We chose Scheme as our language&lt;br /&gt;
* We stripped away the parts of Scheme that make it unsafe for parallel computing&lt;br /&gt;
* We added some useful functions to Scheme to fit our needs (string processing, parallelization primitives…)&lt;br /&gt;
* We implemented a serialization function that can recreate Scheme code from in-memory objects, so values can be loaded on other machines&lt;br /&gt;
* Now we can start implementing our highly parallel map-reduce algorithms, which take map and reduce lambda functions and execute them in parallel&lt;/div&gt;</summary>
		<author><name>Carli</name></author>
	</entry>
</feed>