Homework 1:
Due Wed. Jan 26, 11:59pm
- Pairwise Summation: Reformulate the pairwise summation program to solve the Count 3s computation, assuming that n = 1024 but P = 8.
- Parallel Prefix: Use a random number generator to generate a list of 16 integers. Use the parallel prefix algorithm to create a new sequence of "max prefixes" where each prefix is the max of its predecessors in the list. Build the tree structure and execute the parallel prefix algorithm performing the sweep up the tree and the sweep back down the tree. What value flows "into the root" if the algorithm is to work with signed numbers?
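For concreteness, here is a minimal sequential sketch of the "max prefix" computation (the 16 values are just an example); it is useful only as a reference for checking the output of your parallel-prefix version, and shows the inclusive interpretation (each prefix includes the element itself), which you should adjust if you read "predecessors" exclusively:

    /* Sequential reference for "max prefixes" -- not the tree algorithm. */
    #include <stdio.h>

    int main(void) {
        int a[16] = { 3, -7, 2, 9, 9, -1, 4, 12, 0, 5, -3, 12, 15, 2, 8, 1 };
        int prefix[16];
        int running = a[0];
        for (int i = 0; i < 16; i++) {
            if (a[i] > running) running = a[i];
            prefix[i] = running;            /* max of a[0..i] */
            printf("%d ", prefix[i]);
        }
        printf("\n");
        return 0;
    }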
- Count 3s Implementation:
The goal of this problem is to confirm or refute the results from the textbook regarding the performance (and correctness) of the sequence of solutions to the "Count 3s" problem. The end result from this problem will be both a program and a short report describing and graphing the performance results from executing the program.
For the program, I want to see a single multithreaded (POSIX pthreads) program written in C that can execute all five versions of the given Count 3s solutions: (0) the sequential version, (1) the initial version containing the race condition, (2) the mutex version correcting (1), (3) the thread-local counter version, which waits to combine the counts at the end, and (4) the version with thread-local counters separated onto different cache lines.
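To make the distinction between versions 3 and 4 concrete, here is a minimal sketch of the two counter layouts; the identifier names, the 64-byte cache-line size, and the 64-thread cap are assumptions, not requirements:

    #include <stdio.h>

    #define MAX_THREADS 64
    #define CACHE_LINE  64              /* assumed cache-line size in bytes */

    /* Version 3: private counters, but adjacent in memory, so several
       counters can share one cache line and cause false sharing. */
    long count_v3[MAX_THREADS];

    /* Version 4: each counter padded out to occupy its own cache line. */
    struct padded_count {
        long count;
        char pad[CACHE_LINE - sizeof(long)];
    };
    struct padded_count count_v4[MAX_THREADS];

    int main(void) {
        printf("v3 stride: %zu bytes, v4 stride: %zu bytes\n",
               sizeof(long), sizeof(struct padded_count));
        return 0;
    }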
The program should use command-line arguments to control its operation:
- -v: implementation version (0, 1, 2, 3, 4)
- -p: number of processors (threads)
- -n: number of integers in array
- -d: distribution percentage of 3's in the generated array
The program should begin by generating an array of integers with the requisite percentage of 3s in it. It should then start a timer and launch the Count 3s threads. There should be a join back with the main thread so that, once all threads have completed, the main thread can take another time sample and report the elapsed time for the program, along with both the final count of 3s reported by the threads and the correct number of 3s in the array.
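A minimal sketch of that driver structure, assuming the version-1 (racy) thread body; argument parsing, dispatch on -v, and honoring -d are elided, and all names are illustrative:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>

    int *array;                  /* input data */
    long n = 1000000;            /* -n */
    int  nthreads = 4;           /* -p */
    volatile long count = 0;     /* shared counter (versions 1 and 2) */

    void *count3s_thread(void *arg) {
        long id = (long)arg;
        long lo = id * (n / nthreads), hi = lo + n / nthreads;
        for (long i = lo; i < hi; i++)
            if (array[i] == 3)
                count++;         /* version 1: unsynchronized -- the race */
        return NULL;
    }

    int main(int argc, char **argv) {
        /* ... parse -v, -p, -n, -d here ... */
        array = malloc(n * sizeof(int));
        for (long i = 0; i < n; i++)
            array[i] = rand() % 10;   /* placeholder: ~10% 3s; real program honors -d */

        pthread_t tid[64];
        struct timeval t0, t1;

        gettimeofday(&t0, NULL);                        /* start timer */
        for (long i = 0; i < nthreads; i++)
            pthread_create(&tid[i], NULL, count3s_thread, (void *)i);
        for (long i = 0; i < nthreads; i++)
            pthread_join(tid[i], NULL);                 /* wait for all threads */
        gettimeofday(&t1, NULL);

        double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("elapsed %.6f s, counted %ld\n", elapsed, count);
        return 0;
    }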
For the analysis, you should run enough variations so that you can confirm or refute the results from the book, and it is up to you to determine how you present your results. The Note on page 24 of the text articulates the methodology used in the book to generate their numbers.
Any additional observations on how the size of the array and the distribution percentage of 3s affect the different versions would complete the analysis.
Homework 2:
Due Fri. Feb. 3, 1:30pm (class time)
- CTA Parallel Summation: What is the communication cost predicted by the CTA (expressed in terms of lambda) for four processors adding 1024 numbers, assuming a single variable, sum, allocated in one processor's memory? You may assume the data is evenly split among the four processors. This is analogous to our Try2 of the Count 3s problem from Chapter 1.
Repeat the above, but with an algorithm in which each processor keeps a local sum and combination occurs at the end. (Like Try4).
- Parallel Balanced Expression Checking:
Suppose we have an input of a string expression composed of a linear sequence of the three tokens: '(', 'x', and ')'. So a simple input string example might have the value '((xxx)x(x)(xx))'. While this is a generalization, such strings have real-world counterparts in expressions in programs, or in thinking about the construction of XML trees, which, when valid, are made up of well-nested, user-defined tags.
Let n be the length of our input string and P be the number of processors, and assume n >> P. Your task is to design a parallel algorithm that ultimately yields either True or False, according to the predicate asking whether the original input string is well-formed. That is, the parentheses are properly nested and each begin-paren has a corresponding end-paren.
Making only the assumptions of the CTA model, describe through expository text and pseudo-code the logic of a parallel algorithm to perform this task. Your solution should include the data allocation in the CTA's local memories, what communication is required to/from non-local memory, the protocol for processor interactions, etc.
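For checking the correctness of whatever parallel design you come up with, a sequential reference checker may help. The sketch below implements only the predicate itself; it is not the CTA algorithm you are being asked to design:

    #include <stdbool.h>
    #include <stdio.h>

    /* Sequential well-formedness check over the tokens '(', 'x', ')'. */
    bool well_formed(const char *s) {
        long depth = 0;
        for (; *s; s++) {
            if (*s == '(')       depth++;
            else if (*s == ')')  { if (--depth < 0) return false; }
            /* 'x' tokens do not affect nesting */
        }
        return depth == 0;
    }

    int main(void) {
        printf("%d\n", well_formed("((xxx)x(x)(xx))"));   /* expect 1 */
        printf("%d\n", well_formed("(x))(x"));            /* expect 0 */
        return 0;
    }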
Homework 3:
Due Mon. Feb. 21, 11:59pm
- Parallel Prefix Sum:
The goal of this problem is to implement parallel prefix using MPI on our Beowulf cluster. As with our Count 3s implementation, the objective is to get familiar with the programming environment and to gather some performance information on our implementation. So we will have one week for this homework.
For the program, I want execution to start with the rank 0 process on tashi. The process should take two command line arguments beyond any MPI arguments: the name of an input file on tashi, and a "method" argument. The number of "worker" processes should be determined by the -np parameter in the mpiexec execution. You may assume a number of worker processes equal to a power of 2, so with tashi as the coordinator (and not participating in the work), the np parameter may have values 2, 5, 17, 33, or 65 without oversubscribing our cluster.
Program command-line arguments:
- -f `filename': filename contains the numbers to use in the parallel prefix. The format of the file begins with an integer giving the number of integers to follow, followed by the numbers themselves. You will need an auxiliary program to generate files in this form (a minimal generator sketch follows this argument list). You can/should generate files where n, the number of integers in the file, is a power of 2.
- -v `method': where method can be 0, 1, and perhaps 2. Method 0 means a distributed, but sequential, algorithm in which each process is responsible for computing its respective n/p prefixes, but uses point-to-point communication to get the running sum from its predecessor neighbor and to pass it on to its successor neighbor. Method 1 means to build a binary tree and to use the parallel prefix upsweep and downsweep with point-to-point communication between child and parent nodes. Method 2 means to explore MPI collective communication and use one or more collective calls to perform the upsweep and downsweep.
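A possible auxiliary generator, assuming a plain-text format with the count on the first line and one integer per line thereafter (adjust to whatever format your reader expects; the filename and value range are arbitrary choices):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(int argc, char **argv) {
        long n = (argc > 1) ? atol(argv[1]) : 1024;      /* should be a power of 2 */
        const char *fname = (argc > 2) ? argv[2] : "input.dat";
        FILE *f = fopen(fname, "w");
        if (!f) { perror("fopen"); return 1; }
        srand((unsigned)time(NULL));
        fprintf(f, "%ld\n", n);                          /* count first */
        for (long i = 0; i < n; i++)
            fprintf(f, "%d\n", rand() % 100);            /* then the numbers */
        fclose(f);
        return 0;
    }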
The program should begin by having the coordinator read from the specified file and send the portions of the number list to the worker nodes. All processes should proceed through a barrier, at which point the start of execution time may be recorded. The algorithm (based on the method) may then commence. Once all prefixes are computed, I want the workers to send their portions of the computed prefixes back to the coordinator, who should write the results out to a file. That way we can check for correctness. The output filename may be generated from the basename of the input filename with a '.out' extension.
I would like to see two elapsed times reported for each execution -- one from the start to the point where all processes have computed their prefixes, and one from the start to the point where all prefix subarrays have been received at the coordinator.
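A minimal sketch of the coordinator/worker timing structure described above; the data distribution, the prefix methods themselves, and the output writing are elided, and the second barrier is just one way to detect that all processes have finished computing:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* ... rank 0 reads the file and sends each worker its n/p slice ... */

        MPI_Barrier(MPI_COMM_WORLD);              /* everyone has its data */
        double t_start = MPI_Wtime();

        /* ... run method 0, 1, or 2 on the local slice ... */

        MPI_Barrier(MPI_COMM_WORLD);              /* all prefixes computed */
        double t_compute = MPI_Wtime();

        /* ... workers send their prefix slices back to rank 0 ... */

        if (rank == 0) {
            double t_collect = MPI_Wtime();
            printf("compute time  %f s\n", t_compute - t_start);
            printf("collect time  %f s\n", t_collect - t_start);
            /* ... write <basename>.out ... */
        }
        MPI_Finalize();
        return 0;
    }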
You should include a short write-up giving your observations on the results of executing this program over different values of n, p, and method.
Homework 4:
Due Mon. Mar. 7, 11:59pm
- Red/Blue Simulation:
Consider an n x n matrix that serves as the board for the simulation. The Red/Blue simulation computes two interactive flows on the board that progress through rounds. The board is initialized so that entries, called cells, have one of three colors: red, white, or blue. White indicates a cell is empty. Red cells try to move to the right, if the cell in that position is empty, while blue cells try to move down, if the cell below them is empty. The board should be considered a torus, in that the bottom row "wraps around" to the top, and the rightmost column wraps around to the leftmost column.
Each round consists of two half-steps. In the first half-step, any red cell can move right into an unoccupied cell. Note that if the board had red-red-white in a trio of cells in a row, both reds could "move" to the right, with evaluation proceeding from right to left in the computation of the half-step. In the second half-step, any blue cell can move down into an unoccupied cell. Evaluation in this half-step is from bottom to top. Note that the case where a red vacates a cell in the first half and a blue moves into it in the second half is ok.
The simulation terminates when, viewing the board as overlaid with t x t tiles (where t divides n), _any_ tile's colored squares are more than c% one color.
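As an illustration of the right-to-left evaluation and the torus wrap-around, here is a minimal sequential sketch of the red half-step; the enum names and board representation are just one choice, not a requirement:

    #include <stdio.h>

    enum { WHITE, RED, BLUE };

    /* First (red) half-step on a torus, evaluating each row right to left. */
    static void red_half_step(int n, int board[n][n]) {
        for (int i = 0; i < n; i++)
            for (int j = n - 1; j >= 0; j--) {       /* right-to-left evaluation */
                int right = (j + 1) % n;             /* torus wrap-around */
                if (board[i][j] == RED && board[i][right] == WHITE) {
                    board[i][right] = RED;
                    board[i][j] = WHITE;
                }
            }
    }

    int main(void) {
        int b[3][3] = { { RED, RED, WHITE },         /* the red-red-white trio */
                        { WHITE, WHITE, WHITE },
                        { WHITE, WHITE, WHITE } };
        red_half_step(3, b);
        for (int j = 0; j < 3; j++)
            printf("%d ", b[0][j]);                  /* expect 0 1 1: both reds moved */
        printf("\n");
        return 0;
    }

Note that with wrap-around, a red that moves from the last column into column 0 is revisited later in the same right-to-left sweep; you should decide, and document, whether such a cell may move twice within one half-step.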
Input parameters to the program:
-n: size of one side of the board
-t: size of a tile side
-m: maximum number of rounds (in case the c% threshold is not reached, and for debugging)
-c: termination threshold (as an integer)
You are welcome to add other parameters to control initial board configuration (from a file, and/or by generating random values for red and blue from some initial percentage). Note that you will want to have some mechanism for convincing yourself (and me) that your program is doing the "correct" thing.
- Write a Peril-L solution to the Red/Blue simulation.
- Write an MPI program to solve the Red/Blue simulation.
Homework 5:
Due Wed. Apr. 13, 11:59pm
CUDA Matrix Multiplication
- Complete Lab/Practicum implementing basic Matrix Multiplication
Matrix multiplication of two n x n matrices M and N, resulting in product matrix P, is defined in terms of the (i,j)th element of P: P_{ij} = Sum(M_{ik} * N_{kj}) for k from 1 to n. So the (i,j)th element of P is the dot product of the ith row of M with the jth column of N.
In a naive implementation of matrix multiplication in CUDA, we can define a grid as a two-dimensional representation of the product matrix, with each block in the grid responsible for computing a single element. All three n x n matrices reside in global device memory.
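A minimal sketch of this naive kernel, using a single thread per block as the version-0 option below specifies; identifier names and the launch shown in the comment are illustrative:

    /* One block per element of P, one thread per block (version 0). */
    __global__ void matmul_naive(const float *M, const float *N, float *P, int n)
    {
        int row = blockIdx.y;            /* one block <-> one element of P */
        int col = blockIdx.x;
        if (row < n && col < n) {
            float sum = 0.0f;
            for (int k = 0; k < n; k++)
                sum += M[row * n + k] * N[k * n + col];
            P[row * n + col] = sum;
        }
    }

    /* possible launch: dim3 grid(n, n); matmul_naive<<<grid, 1>>>(dM, dN, dP, n); */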
Assumptions:
- All matrices are square and of dimension n.
- Data type of elements is float.
- Goal is to generate and optionally output the product matrix P.
- Should be able to either randomly generate M and N or to read M and N from files.
Input parameters to the program:
-n: size of one side of any of the three matrices; if n is specified and neither M nor N are specified, generate M and N randomly
-M: filename for first input matrix
-N: filename for second input matrix
-p: if present, output resultant matrix P
-o: filename for output matrix P, otherwise use stdout if -p specified
-t: perform timing measurements of the matrix multiplication operation
-v: version: 0 is the 1-thread-per-block version, 1 is the optimized shared-memory version below
You are welcome to add other parameters to control configuration. For instance, you might use -T<value> to specify tile size in the optimized version.
- Extend CUDA Matrix Multiply to use Shared Memory
We know from the CUDA architecture and from our case study of the implementation of the reduce operation in CUDA that we can improve performance through shared memory among threads in a block. In this extension, we will define our product matrix to consist of a tiling of blocks (T x T), and will assign the computation of a tile to a thread block.
If we look carefully at the set of global values required for the dot-product computation involved in a tile, we see that the same global values get referenced multiple times in the computation of the individual dot-products. This is the key to the shared memory optimization. First see what values get referenced within a tile. Then divide the kernel into two phases (much like we did for reduction). In the first phase, we apportion the required values amongst the threads and have them bring these values into shared memory once. In the second phase, the threads can use these shared-memory "cached" values to compute their individual dot-products.
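One common way to realize these two phases is the tiled kernel sketched below; it assumes n is a multiple of the tile width T (fixed at 16 here only for brevity) and that each thread block is T x T threads:

    __global__ void matmul_tiled(const float *M, const float *N, float *P, int n)
    {
        const int T = 16;                 /* tile width; must match blockDim */
        __shared__ float Ms[T][T];
        __shared__ float Ns[T][T];

        int row = blockIdx.y * T + threadIdx.y;
        int col = blockIdx.x * T + threadIdx.x;
        float sum = 0.0f;

        for (int t = 0; t < n / T; t++) {
            /* phase 1: each thread loads one element of M's tile and one of N's */
            Ms[threadIdx.y][threadIdx.x] = M[row * n + (t * T + threadIdx.x)];
            Ns[threadIdx.y][threadIdx.x] = N[(t * T + threadIdx.y) * n + col];
            __syncthreads();

            /* phase 2: partial dot product out of the shared-memory tiles */
            for (int k = 0; k < T; k++)
                sum += Ms[threadIdx.y][k] * Ns[k][threadIdx.x];
            __syncthreads();
        }
        P[row * n + col] = sum;
    }

    /* possible launch: dim3 block(16, 16), grid(n / 16, n / 16);
       matmul_tiled<<<grid, block>>>(dM, dN, dP, n); */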
- Performance Analysis of Speedup
Analyze the benefits of your solution and compute the speedup of the optimized version against the unoptimized version as we scale both n and T.
For additional points here, also compare against a sequential solution.
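As a reminder of the metric being asked for: speedup(n, T) = time_v0(n) / time_v1(n, T), and, for the extra comparison, speedup_seq(n, T) = time_seq(n) / time_v1(n, T), where the time_* names are simply shorthand for the measured elapsed times of the respective versions.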
Homework 6:
Due Mon. May. 2, 11:59pm
MovieLens Dataset Exercises:
- Each rating in the ratings.dat dataset is accompanied by a timestamp. Recall that the timestamp measures, in seconds, the time since January 1, 1970 (known in Unix systems as the "epoch"). Design a MapReduce application that will count the number of ratings per date. So the output will be a list of individual dates, where the integer value associated with each date is the number of ratings submitted on that date, regardless of movie or user. (A sketch of one possible mapper for this exercise appears after this pair of exercises.)
- Suppose that I want a histogram of how many of each rating are in the database, so that I can see the distribution from low to high ratings. Design a MapReduce application to give counts of each of the integral ratings between 0 and 5.
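If, for example, you prototype the ratings-per-date exercise with Hadoop Streaming (where the mapper and reducer are ordinary executables that read stdin and write tab-separated key/value lines), the mapper might look like the sketch below. It assumes the MovieLens ratings.dat line format UserID::MovieID::Rating::Timestamp; a reducer would then sum the 1s for each date key. This is only one possible realization, not a required framework or file format:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Streaming-style mapper: emit "YYYY-MM-DD<TAB>1" for each rating line. */
    int main(void) {
        char line[256];
        while (fgets(line, sizeof line, stdin)) {
            char *field = strtok(line, ":");    /* "::"-separated fields */
            char *ts = NULL;
            while (field) { ts = field; field = strtok(NULL, ":"); }
            if (!ts) continue;                  /* ts is the last field: the timestamp */
            time_t t = (time_t)strtoll(ts, NULL, 10);
            struct tm tm_utc;
            gmtime_r(&t, &tm_utc);              /* seconds since the epoch -> date */
            char date[16];
            strftime(date, sizeof date, "%Y-%m-%d", &tm_utc);
            printf("%s\t1\n", date);
        }
        return 0;
    }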
Gutenberg Books Dataset Exercises:
- We want to create a word-based index for the Complete Works of Shakespeare. So our desired output is a list of words, where each word is annotated with a list of line numbers where that word appears. Create a MapReduce application that performs this LineIndexing function. Ultimately, the list of line numbers should not include repeats and should be in ascending order.
- Create the "standard" word count MapReduce application. Use a combiner function to optimize/reduce the communication from a Mapper to the Reducers. Run the word count example on the set of works by Charles Dickens provided and report the 5 most often used words by Dickens.