# These notes are intended for use by students in cs1501 at the University of Pittsburgh and no one else


## Integer Multiplication

• Multiply our integers the way we learn to multiply in school
• However, we are using base 2 rather than base 10
• See example on board
• Run-time of algorithm?
• How to implement?
• We need to be smart so as not to waste space
• The way we learn in school has "partial products" that can use Theta(N^2) memory
• We have two nested loops:
• Outer loop goes through each bit of first operand
• Inner loop goes through each bit of second operand
• Total runtime is Theta(N^2)

## Integer Multiplication

```
function Mult(X[0..n-1], Y[0..n-1]) returns Z[0..2n-1]
    // note: X and Y have n digits; Z has up to 2n digits
    // the LEAST significant digits (smaller powers of ten)
    // are stored in the smaller indices
    long S = 0
    for j from 0 to (2n-1) do       // go through digits of result
        for i from 0 to (n-1) do    // digits of X
            if (0 <= (j-i) <= (n-1)) then
                S = S + X[i] * Y[j-i]
        Z[j] = S mod 10             // remainder to get curr digit
        S = S / 10                  // integer division for carry
    return Z
end function
```
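A runnable Python translation of the pseudocode above (function and variable names are mine; the digit-list representation matches the comments, with the least significant digit at index 0):

```python
def mult(x, y):
    """Grade-school multiplication of two n-digit numbers.

    x and y are lists of base-10 digits, least significant digit
    at index 0.  Returns the product as a list of 2n digits.
    """
    n = len(x)
    z = [0] * (2 * n)
    s = 0  # running sum; holds the carry between digit positions
    for j in range(2 * n):            # each digit of the result
        for i in range(n):            # each digit of x
            if 0 <= j - i <= n - 1:
                s += x[i] * y[j - i]
        z[j] = s % 10                 # remainder gives the current digit
        s //= 10                      # integer division keeps the carry
    return z

# 123 * 456 = 56088, digits stored least significant first
print(mult([3, 2, 1], [6, 5, 4]))    # [8, 8, 0, 6, 5, 0]
```

The two nested loops make the Theta(N^2) runtime visible directly, while only one running sum (rather than a full table of partial products) is kept in memory.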

## Integer Multiplication

• Can we improve on Theta(N^2)?
• How about if we try a divide and conquer approach?
• Let's break our N-bit integers in half using the high and low order bits:
• X = 1001011011001000
• = 2^(N/2) * X_H + X_L
• where X_H = high bits = 10010110
• X_L = low bits = 11001000

## More Integer Multiplication

• Let's look at this in some more detail
• Recall from the last slide how we break our N-bit integers in half using the high and low order bits:
• X = 1001011011001000
• = 2^(N/2) * X_H + X_L
• where X_H = high bits = 10010110
• X_L = low bits = 11001000
• Given two N-bit numbers, X and Y, we can then re-write each as follows:
• X = 2^(N/2) * X_H + X_L
• Y = 2^(N/2) * Y_H + Y_L

## More Integer Multiplication

• Now, the product, X*Y, can be written as:
• XY = (2^(N/2) * X_H + X_L) * (2^(N/2) * Y_H + Y_L)
• = 2^N * X_H * Y_H + 2^(N/2) * (X_H * Y_L + X_L * Y_H) + X_L * Y_L
• Note the implications of this equation
• The multiplication of 2 N-bit integers is being defined in terms of
• 4 multiplications of N/2 bit integers
• Some additions of ~N bit integers
• Some shifts (up to N positions)
• But what does this tell us about the overall runtime?
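Assuming N is a power of 2, the four-multiplication identity above can be sketched directly in Python (function and variable names are mine; Python's arbitrary-precision integers stand in for the bit arrays):

```python
def dc_mult(x, y, n):
    """Divide-and-conquer multiply of two n-bit integers via the
    four-product identity.  Assumes n is a power of 2."""
    if n == 1:
        return x * y                 # single-bit base case
    half = n // 2
    mask = (1 << half) - 1
    x_hi, x_lo = x >> half, x & mask
    y_hi, y_lo = y >> half, y & mask
    # four recursive multiplications of n/2-bit integers
    m1 = dc_mult(x_hi, y_hi, half)   # X_H * Y_H
    m2 = dc_mult(x_hi, y_lo, half)   # X_H * Y_L
    m3 = dc_mult(x_lo, y_hi, half)   # X_L * Y_H
    m4 = dc_mult(x_lo, y_lo, half)   # X_L * Y_L
    # shifts and additions put the pieces back together
    return (m1 << n) + ((m2 + m3) << half) + m4

print(dc_mult(0b1011, 0b1101, 4))    # 11 * 13 = 143
```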

## Recursive Run-time Analysis

• How to analyze the divide and conquer algorithm?
• Analysis is more complicated than iterative algorithms due to recursive calls
• For recursive algorithms, we can do analysis using a special type of mathematical equation called a Recurrence Relation
• Idea is to determine two things for the recursive calls
• How much work is to be done during the current call, based on the current problem size?
• How much work is "passed on" to the recursive calls?

## Recursive Run-time Analysis

• Let's examine the recurrence relation for the divide and conquer multiplication algorithm
• We will assume that the integers are divided exactly in half at each recursive call
• Original number of bits must be a power of 2 for this to be true
• Work at the current call is due to shifting and binary addition. For an N-bit integer this should require operations proportional to N
• Work "passed on" is solving the same problem (multiplication) 4 times, but with each of half the original size
• X_H*Y_H, X_H*Y_L, X_L*Y_H, X_L*Y_L

## Recursive Run-time Analysis

• So we write the recurrence as
• T(N) = 4T(N/2) + Theta(N)
• Or, in words: the operations required to multiply two N-bit integers equal 4 times the operations required to multiply two N/2-bit integers, plus the operations required to put the pieces back together
• If we emphasized analysis in this course, we would now learn how to solve this recurrence
• We want a Theta run-time in direct terms of N – i.e. we want to remove the recursive component
• There are a number of techniques that can accomplish this

## More Integer Multiplication

• However, for our purposes we will just state that the solution is Theta(N^2)
• Note that this is the SAME run-time as Gradeschool
• Further, the overhead for this will likely make it slower overall than Gradeschool
• So why did we bother?

## More Integer Multiplication

• Karatsuba's Algorithm
• If we can reduce the number of recursive calls needed for the divide and conquer algorithm, perhaps we can improve the run-time
• How can we do this?
• Let's look at the equation again
• XY = 2^N * X_H * Y_H + 2^(N/2) * (X_H * Y_L + X_L * Y_H) + X_L * Y_L
• where M1 = X_H*Y_H, M2 = X_H*Y_L, M3 = X_L*Y_H, M4 = X_L*Y_L
• Note that we don't really NEED M2 and M3
• All we need is the SUM OF THE TWO, M2 + M3
• If we can somehow derive this sum using only one rather than two multiplications, we can improve our overall run-time
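Karatsuba's observation is that (X_H + X_L)(Y_H + Y_L) = M1 + (M2 + M3) + M4, so one extra multiplication plus subtractions of the already-computed M1 and M4 yields the needed sum. A Python sketch with my own naming (the base case stops at 2 bits because the sums X_H + X_L and Y_H + Y_L can be one bit longer than half-size):

```python
def karatsuba(x, y, n):
    """Multiply two n-bit integers with three recursive products
    instead of four."""
    if n <= 2:
        return x * y                  # small base case
    half = n // 2
    mask = (1 << half) - 1
    x_hi, x_lo = x >> half, x & mask
    y_hi, y_lo = y >> half, y & mask
    m1 = karatsuba(x_hi, y_hi, half)              # M1 = X_H * Y_H
    m4 = karatsuba(x_lo, y_lo, half)              # M4 = X_L * Y_L
    # one multiplication replaces M2 and M3:
    # (X_H + X_L)(Y_H + Y_L) - M1 - M4 = M2 + M3
    m23 = karatsuba(x_hi + x_lo, y_hi + y_lo, half + 1) - m1 - m4
    return (m1 << (2 * half)) + (m23 << half) + m4

print(karatsuba(0b1011, 0b1101, 4))   # 11 * 13 = 143
```

With three subproblems the recurrence becomes T(N) = 3T(N/2) + Theta(N), which solves to Theta(N^log2(3)), roughly Theta(N^1.585), beating the grade-school bound.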