• ### I'm not wrong, you're just slightly right...

One thing that might be apparent to the discerning eye is that this program actually tests numbers that aren't prime (including numbers that are multiples of primes). This isn't much of a concern. Because each candidate is tested with a multiplicity ("P"^"M"), any number that is a composite of smaller primes will automatically fail the even-divisibility test: by the time a composite "P" is reached, all of its prime factors have already been divided out of "R". "P" simply keeps increasing until it gets to a number that evenly divides "R". Since all composites (both less than and greater than "P") are taken care of by dividing out "P" raised to its multiplicity ("M"), it really doesn't matter whether "P" itself is prime or not.

Any number that can't evenly divide "R" falls through the sieve, and the next number is tested. Only when "P" is prime and evenly divides "R" (with some multiplicity "M") does the test catch and reduce "R". This ensures that prime numbers, and only prime numbers, reduce "R" until it equals 1, at which point the program can terminate.

Even with such testing, the runtime of the algorithm would be something on the order of..., where c is a constant representing the number of operations required to test "P" (even when it isn't prime) for the input number n (the "N" in the program). I may be wrong in my assessment (I would really love for someone to prove me wrong), but I believe this means my program represents an algorithm that can conduct prime factorization in polynomial time. Slightly modified, it could work with larger numbers. Even difficult composite numbers that are products of co-primes would fall prey to it.

Which is the main reason I'm posting it here. I would love for someone to prove me wrong, or to help me figure out how to let it handle larger numbers without taking too much away from the algorithm itself.

• ### The program is a-casting...

This is the program. It makes heavy use of (or rather heavily abuses) casting to properly deal with the numbers it has to work with. The only Achilles' heel of the entire program is the limitation of numerical precision: it tends to fall apart on (or just plain takes too long for my taste with) large numbers. This, however, is a limitation of the computer's hardware and not of the program itself.

```cpp
#include <iostream>
#include <cmath>
using namespace std;

int main() {
    char check = 'y';

    while (check == 'y') {
        int P = 2; int D = 0; int M = 0; int R = 0; int N = 0;

        cout << "Enter a number: ";
        cin >> N;
        cout << endl;
        R = N;

        while ((P < (N + 1)) && (R >= P)) {
            M = 0;

            // Test whether P^(M+1) evenly divides R; the double casts are
            // needed because pow() works on floating-point values.
            if (((R % int(pow(double(P), double(M + 1)))) == 0) && ((R / int(pow(double(P), double(M + 1)))) >= 1)) {
                while (((R % int(pow(double(P), double(M + 1)))) == 0) && ((R / int(pow(double(P), double(M + 1)))) >= 1)) {
                    M++; D++;
                }

                // Divide out the full prime power P^M and report it.
                R = (R / int(pow(double(P), double(M))));
                cout << P << " is a factor of " << N << " with a multiplicity of " << M << ". \n";
            }
            P++;
        }
        // D == 1 means the only factor found was N itself, with multiplicity 1.
        if (D == 1) {
            cout << "Which means " << N << " is a prime number. \n";
        }

        cout << "Again (y/n)? ";
        cin >> check;
        cout << endl;
    }
    return 0;
}
```