Czech National Team

Team forum
It is currently Fri 15 Dec 2017 05:19

All times are UTC + 1 hour




Post subject: About NFS@Home in more detail
Posted: Mon 17 Dec 2012 14:47
Offline
5.26315789474 %

Joined: Mon 17 Dec 2012 14:45
Posts: 30
Date of birth: 28 Feb 1980
The goal of NFS@Home is to factor large numbers using the Number Field Sieve algorithm. After setting up two polynomials and various parameters, the project participants "sieve" the polynomials to find pairs of values, called "relations," for which the values of both polynomials are completely factored. Each workunit finds a small number of relations and returns them. Once these are returned, the relations are combined into one large file, and the "postprocessing" begins. The postprocessing involves combining primes from the relations to eliminate as many as possible, constructing a matrix from those remaining, solving this matrix, and then performing square roots of the products of the relations indicated by the solutions to the matrix. The end result is the factors of the number.

NFS@Home is interested in the continued development of open source, publicly available tools for large integer factorization. Over the past couple of years, the capability of open source tools, in particular the lattice sieve of the GGNFS suite and the program msieve, has dramatically improved.

Integer factorization is interesting from both mathematical and practical perspectives. Mathematically, for instance, the calculation of multiplicative functions in number theory for a particular number requires the factors of the number. Likewise, the integer factorization of particular numbers can aid in the proof that an associated number is prime. Practically, many public key algorithms, including the RSA algorithm, rely on the fact that the publicly available modulus cannot feasibly be factored; if it is factored, the private key can be easily calculated. Until quite recently, RSA-512, which uses a 512-bit (155-digit) modulus, was in common use. As recently demonstrated by the factoring of the Texas Instruments calculator keys, such moduli are no longer secure.
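
The last point can be made concrete with a toy example: once the modulus is factored, recovering the private exponent is elementary arithmetic. All numbers below are hypothetical and absurdly small; this is only a sketch of the relationship, not of real RSA.

```python
from math import gcd

# Toy RSA key (hypothetical numbers, far too small to be secure).
p, q = 61, 53                # the secret factors of the modulus
n = p * q                    # public modulus, n = 3233
e = 17                       # public exponent

# Knowing p and q immediately gives Euler's totient phi(n)...
phi = (p - 1) * (q - 1)
assert gcd(e, phi) == 1

# ...and the private exponent is just the modular inverse of e.
d = pow(e, -1, phi)          # Python 3.8+ modular inverse

# Round-trip check: "encrypt" then "decrypt" a message m < n.
m = 42
c = pow(m, e, n)
assert pow(c, d, n) == m
```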

The NFS@Home BOINC project makes it easy for the public to participate in state-of-the-art factorizations. The project's interest is to see how far we can push the envelope, perhaps become competitive with the larger university projects running on clusters, and perhaps even collaborate on a really large factorization.

The numbers are chosen from the Cunningham project. The project is named after Allan Joseph Champneys Cunningham, who published the first version of the factor tables together with Herbert J. Woodall in 1925. This project is one of the oldest continuously ongoing projects in computational number theory, and is currently maintained by Sam Wagstaff at Purdue University. The third edition of the book, published by the American Mathematical Society in 2002, is available as a free download. All results obtained since the publication of the third edition are available on the Cunningham project website.

Depending on the target size, there are four siever applications available (each can be set to "yes" or "no" at http://escatter11.fullerton.edu/nfs/prefs.php?subset=project):

lasieved - app for the RSALS subproject, uses less than 0.5 GB of memory
lasievee - work nearly always available, uses up to 0.5 GB of memory
lasievef - used for huge factorizations, uses up to 1 GB of memory
lasieve5f - used for huge factorizations, uses up to 1 GB of memory

How is credit distributed per wu?

lasieved - 36
lasievee - 44
lasievef - dead
lasieve5f - 130

Why the difference in credits?

The more valuable calculations get more credit. For 16e (lasievef + lasieve5f) especially, the extra credit also compensates for the large memory usage.

What project uses what application?

lasieved - Oddperfect, n^n+(n+1)^(n+1), Fibonacci, Lucas, Cunningham, Cullen and Woodall numbers with SNFS difficulty below 250.
lasievee - Cunningham, Oddperfect or other numbers with SNFS difficulty from 250 to ~280.
lasievef - dead
lasieve5f - pushes the state of the art for very difficult factorizations, above SNFS difficulty 280

The limits depend upon the boundaries chosen for the polynomial and on the characteristics of the number being factored. The details involve advanced mathematics.

For a (much) more technical description of the NFS, see the Wikipedia article or Briggs' Master's thesis.


Last edited by Carlos Pinho on Sun 19 Jul 2015 20:19, edited 1 time in total

Post subject: Number Field Sieve (NFS) References
Posted: Mon 17 Dec 2012 17:15
From msieve readme.txt file:

Quote:
Matthew Briggs' 'An Introduction to the Number Field Sieve' is
a very good introduction; it's heavier than C&P in places
and lighter in others

Michael Case's 'A Beginner's Guide to the General Number Field
Sieve' has more detail all around and starts to deal with
advanced stuff

Per Leslie Jensen's thesis 'Integer Factorization' has a lot of
introductory detail on NFS that other references lack

Peter Stevenhagen's "The Number Field Sieve" is a whirlwind
introduction to the algorithm

Steven Byrnes' "The Number Field Sieve" is a good simplified
introduction as well.

Lenstra, Lenstra, Manasse and Pollard's paper 'The Number Field
Sieve' is nice for historical interest

'Factoring Estimates for a 1024-bit RSA Modulus' should be required
reading for anybody who thinks it would be a fun and easy project to
break a commercial RSA key.

'On the Security of 1024-bit RSA and 160-bit Elliptic Curve
Cryptography' is a 2010-era update to the previous paper

Brian Murphy's thesis, 'Polynomial Selection for the Number Field
Sieve Algorithm', is simply awesome. It goes into excruciating
detail on a very undocumented subject.

Thorsten Kleinjung's 'On Polynomial Selection for the General Number
Field Sieve' explains in detail a number of improvements to
NFS polynomial selection developed since Murphy's thesis.

Kleinjung's latest algorithmic ideas on NFS polynomial selection
are documented at the 2008 CADO Factoring Workshop:
http://cado.gforge.inria.fr/workshop/abstracts.html

Jason Gower's 'Rotations and Translations of Number Field Sieve
Polynomials' describes some very promising improvements to the
polynomial generation process. As far as I know, nobody has actually
implemented them.

D.J. Bernstein has two papers in press and several slides on
some improvements to the polynomial selection process, that I'm
just dying to implement.

Aoki and Ueda's 'Sieving Using Bucket Sort' describes the kind of
memory optimizations that a modern siever must have in order to
be fast

Dodson and Lenstra's 'NFS with Four Large Primes: An Explosive
Experiment' is the first realization that maybe people should
be using two large primes per side in NFS after all

Franke and Kleinjung's 'Continued Fractions and Lattice Sieving' is
the only modern reference available on techniques used in a high-
performance lattice siever.

Bob Silverman's 'Optimal Parametrization of SNFS' has lots of detail on
parameter selection and implementation details for building a line
siever

Ekkelkamp's 'On the amount of Sieving in Factorization Methods'
goes into a lot of detail on simulating NFS postprocessing

Cavallar's 'Strategies in Filtering in the Number Field Sieve'
is really the only documentation on NFS postprocessing

My talk 'A Self-Tuning Filtering Implementation for the Number
Field Sieve' describes research that went into Msieve's filtering code.

Denny and Muller's extended abstract 'On the Reduction of Composed
Relations from the Number Field Sieve' is an early attempt at NFS
filtering that's been almost forgotten by now, but their techniques
can work on top of ordinary NFS filtering

Montgomery's 'Square Roots of Products of Algebraic Numbers' describes
the standard algorithm for the NFS square root phase

Nguyen's 'A Montgomery-Like Square Root for the Number Field Sieve'
is also standard stuff for this subject; I haven't read this or the
previous paper in great detail, but that's because the conventional
NFS square root algorithm is still a complete mystery to me

David Yun's 'Algebraic Algorithms Using P-adic Constructions' provided
a lot of useful theoretical insight into the math underlying the
simple brute-force NFS square root algorithm that msieve uses


Decio Luiz Gazzoni Filho adds:

The collection of papers `The Development of the Number Field
Sieve' (Springer Lecture Notes In Mathematics 1554) should be
absolutely required reading -- unfortunately it's very hard to get
ahold of. It's always marked `special order' at Amazon.com, and I
figured I shouldn't even try to order as they'd get back to me in a
couple of weeks saying the book wasn't available. I was very lucky to
find a copy available one day, which I promptly ordered. Again, I
cannot recommend this book enough; I had read lots of literature on
NFS but the first time I `got' it was after reading the papers here.
Modern expositions of NFS only show the algorithm as it's currently
implemented, and at times certain things are far from obvious. Now
this book, being a historical account of NFS, shows how it progressed
starting from John Pollard's initial work on SNFS, and things that
looked out of place start to make sense. It's particularly
enlightening to understand the initial formulation of SNFS, without
the use of character columns.
[NOTE: this has been reprinted and is available from bn.com, at least -JP]

As usual, a very algebraic and deep exposition can be found in Henri
Cohen's book `A Course In Computational Algebraic Number Theory'.
Certainly not for the faint of heart though. It's quite dated as
well, e.g. the SNFS section is based on the `old' (without character
columns) SNFS, but explores a lot of the underlying algebra.

In order to comprehend NFS, lots of background on algebra and
algebraic number theory is necessary. I found a nice little
introductory book on algebraic number theory, `The Theory of
Algebraic Numbers' by Harry Pollard and Harold Diamond. It's an old
book, not contaminated by the excess of abstraction found in modern
books. It helped me a lot to get a grasp on the algebraic concepts.
Cohen's book is hard on the novice but surprisingly useful as one
advances on the subject, and the algorithmic touches certainly help.

As for papers: `Solving Sparse Linear Equations Over Finite Fields'
by Douglas Wiedemann presents an alternate method for the matrix
step. Block Lanczos is probably better, but perhaps Wiedemann's
method has some use, e.g. to develop an embarrassingly parallel
algorithm for linear algebra (which, in my opinion, is the current
holy grail of NFS research).


Post subject: Factoring an integer using NFS - Part1
Posted: Mon 17 Dec 2012 17:16
Factoring an integer using NFS has 3 main steps:

1. Select Polynomial
2. Collect Relations via Sieving (NFS@Home is dedicated to this step)
3. Combine Relations


1. Polynomial Selection


Step 1 of NFS involves choosing a polynomial-pair (customarily shortened to 'a polynomial') to use in the other NFS phases. The polynomial is completely specific to the number you need factored, and there is an effectively infinite supply of polynomials that will work. The quality of the polynomial you select has a dramatic effect on the sieving time: a *good* polynomial can make the sieving proceed two or three times faster than an average polynomial. So you really want a *good* polynomial, and for large problems you should be prepared to spend a fair amount of time looking for one.

Just how long is too long, and exactly how you should look for good polynomials, is currently an active research area. The approximate consensus is that you should spend maybe 3-5% of the anticipated sieving time looking for a good polynomial.

We measure the goodness of a polynomial primarily by its Murphy E score; this is the probability, averaged across all the possible relations we could encounter during the sieving, that an 'average' relation will be useful for us. This is usually a very small number, and the E score to expect goes down as the number to be factored becomes larger. A larger E score is better.

Besides the E score, the other customary measure of polynomial goodness is the 'alpha score', an approximate measure of how much of an average relation is easily 'divided out' by dividing by small primes. The E score computation requires that we know the approximate alpha value, but alpha is also of independent interest. Good alpha values are negative, and a negative alpha with large absolute value is better. Both E and alpha were first formalized in Murphy's wonderful dissertation on NFS polynomial selection.

With that in mind, here's an example polynomial for a 100-digit input of no great significance:

R0: -2000270008852372562401653
R1: 67637130392687
A0: -315744766385259600878935362160
A1: 76498885560536911440526
A2: 19154618876851185
A3: -953396814
A4: 180
skew 7872388.07, size 9.334881e-014, alpha -5.410475, combined = 1.161232e-008

As mentioned, this 'polynomial' is actually a pair of polynomials: the rational polynomial R1 * x + R0 and the 4th-degree algebraic polynomial

A4 * x^4 + A3 * x^3 + A2 * x^2 + A1 * x + A0

The algebraic polynomial is of degree 4, 5, or 6 depending on the size of the input. The 'combined' score is the Murphy E value for this polynomial, and is pretty good in this case. The other thing to note about this polynomial-pair is that the leading algebraic coefficient is very small, and each other coefficient looks like it's a fixed factor larger than the next higher-degree coefficient. That's because the algebraic polynomial expects the sieving region to be 'skewed' by a factor equal to the reported skew above.
The polynomial selection determined that the 'average size' of relations drawn from the sieving region is smallest when the region is 'short and wide' by a factor given by the skew. The big advantage to skewing the polynomial is that it allows the low-order algebraic coefficients to be large, which in turn allows choosing them to optimize the alpha value. The modern algorithms for selecting NFS polynomials are optimized to work when the skew is very large.
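
The skew relationship is easy to check numerically: going down one degree should multiply the coefficient size by roughly the skew. A small sketch using the example polynomial above (only the arithmetic is mine; the coefficients are copied from the polynomial shown earlier):

```python
# Coefficients of the example algebraic polynomial, highest degree first.
A = [180,
     -953396814,
     19154618876851185,
     76498885560536911440526,
     -315744766385259600878935362160]
skew = 7872388.07

# Each coefficient should be roughly `skew` times larger than the next
# higher-degree one; the ratios land within an order of magnitude of it.
for hi, lo in zip(A, A[1:]):
    print(f"{abs(lo) / abs(hi):.2e}  vs skew {skew:.2e}")
```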

NFS polynomial selection is divided into two stages. Stage 1 chooses the leading algebraic coefficient and tries to find the two rational polynomial coefficients so that the top three algebraic coefficients are small. Because stage 1 doesn't try to find the entire algebraic polynomial, it can use powerful sieving techniques to speed up this portion of the search. When stage 1 finds a 'hit', composed of the rational and the leading algebraic polynomial coefficient, Stage 2 then finds the complete polynomial pair and tries to optimize both the alpha and E values. A single stage 1 hit can generate many complete polynomials in stage 2. You can think of stage 1 as a very compute-intensive net that tries to drag in something good, and stage 2 as a shorter but still compute-intensive process that tries to polish things.


Post subject: Factoring an integer using NFS - Part2
Posted: Mon 17 Dec 2012 17:16
Factoring an integer using NFS has 3 main steps:

1. Select Polynomial
2. Collect Relations via Sieving (NFS@Home is dedicated to this step)
3. Combine Relations


2. Sieving for Relations

The sieving step is not the most theoretically complex part of the algorithm, but it is the most time-consuming, because it iterates over a large domain performing expensive operations such as division and modulo, although some of these can be avoided by using logarithms.
In general, optimizing the sieving step gives the biggest reduction in the actual running time of the algorithm. It is easy to use a large amount of memory in this step, so one should be aware of this and try to reuse arrays and use the smallest possible data types. For record factorizations the factor bases can contain millions of elements, so one should aim for the best on-disk/in-memory tradeoff.
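
The logarithm trick mentioned above can be shown in a miniature line-sieve-style sketch. The factor base and interval here are hypothetical toys, and prime powers are ignored, so the threshold is deliberately generous; real sievers are far more careful:

```python
from math import log2

factor_base = [2, 3, 5, 7, 11, 13]   # hypothetical tiny factor base
start, length = 1000, 64             # sieve the interval [1000, 1064)
sieve = [0.0] * length               # accumulated log contributions

# One addition per hit instead of a division: for each prime, walk its
# multiples in the interval and add log2(p) at each position.
for p in factor_base:
    for i in range((-start) % p, length, p):
        sieve[i] += log2(p)

# Positions whose accumulated logs come close to log2 of the value are
# probably smooth; only these few would get an exact trial division.
threshold = 3.0                      # slack for prime powers and rounding
candidates = [start + i for i in range(length)
              if sieve[i] >= log2(start + i) - threshold]
# 1001 = 7*11*13 and 1050 = 2*3*5^2*7 both survive this cheap test.
```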

The purpose of the sieving step is to find usable relations, i.e. pairs (a, b) with the following properties:
• gcd(a, b) = 1
• a + bm is smooth over the rational factor base
• b^deg(f)*f(a/b) is smooth over the algebraic factor base
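
A direct, unoptimized check of these properties is easy to write down. The polynomial and factor bases below are hypothetical toys; real sievers never test candidates one at a time like this:

```python
from math import gcd

def is_smooth(n, factor_base):
    """True if |n| factors completely over the factor base."""
    n = abs(n)
    for p in factor_base:
        while n % p == 0:
            n //= p
    return n == 1

def is_relation(a, b, m, f, rational_fb, algebraic_fb):
    """Check the defining properties of an NFS relation (a, b).

    f holds the algebraic polynomial's coefficients, highest degree first.
    """
    if gcd(a, b) != 1:
        return False
    deg = len(f) - 1
    rational = a + b * m                      # value of the rational side
    # b^deg(f) * f(a/b), expanded as a homogeneous integer polynomial.
    algebraic = sum(c * a**(deg - i) * b**i for i, c in enumerate(f))
    return (is_smooth(rational, rational_fb)
            and is_smooth(algebraic, algebraic_fb))

# Toy example with f(x) = x^2 - 2 and m = 5:
# rational side 3 + 1*5 = 8 = 2^3, algebraic side 3^2 - 2 = 7.
assert is_relation(3, 1, 5, [1, 0, -2], [2, 3], [7])
assert not is_relation(2, 4, 5, [1, 0, -2], [2, 3], [7])   # gcd(2,4) != 1
```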

Finding elements with these properties can be done by various sieving methods like the classical line sieving or the faster lattice sieving, the latter being used at NFS@Home.

Lattice sieving was proposed by John Pollard in "Lattice sieving," Lecture Notes in Mathematics 1554 (1991), 43–49. The factor bases are split into smaller sets, and the elements divisible by a large prime q are sieved. The sizes of the factor bases have to be determined empirically, and they depend on the precision of the sieving code: whether all smooth elements are found, or whether some are skipped by using special-q methods.

One advantage the lattice siever has is the following. The yield rate for the line siever decreases over time because the norms get bigger as the sieve region moves away from the origin. The lattice siever brings the sieve region "back to the origin" when special-q's are changed. This might be its biggest advantage (if there is one).

3. Combine Relations

The last phase of NFS factorization is a group of tasks collectively referred to as 'NFS postprocessing'. You need the factor base file described in the sieving section (only the polynomial is needed, not the actual factor base entries), and all of the relations from the sieving. If you have performed sieving in multiple steps or on multiple machines, all of the relations that have been produced need to be combined into a single giant file. And by giant I mean *really big*; the largest NFS jobs that I know about currently have involved relation files up to 100GB in size.
Even a fairly small 100-digit factorization generates perhaps 500MB of disk files, so you are well advised to allow plenty of space for relations. Don't like having to deal with piling together thousands of files into one? Sorry, but disk space is cheap now.

With the factor base and relation data file available, it is time to perform NFS postprocessing. However, for larger jobs, or for any job where data has to be moved from machine to machine, it is probably necessary to divide the postprocessing into its three fundamental tasks. These are described below:

NFS Filtering
-------------

The first phase of NFS postprocessing is the filtering step. This analyzes the input relation file, sets up the rest of the filtering to ignore relations that will not be useful (usually 90% of them or more), and produces a 'cycle file' that describes the huge matrix to be used in the next postprocessing stage.

To do that, every relation is assigned a unique number, corresponding to its line number in the relation file. Relations are numbered starting from zero, and the filtering also adds 'free relations' to the dataset. Free relations are so called because no sieving is required to find them; they are a unique feature of the number field sieve, although there will never be very many of them. Filtering is a very complex process. If you do not have enough relations for filtering to succeed, no output is produced other than complaints to that effect. If there are 'enough' relations for filtering to succeed, the result is a 'cycle file'.

How many relations is 'enough'? This is unfortunately another hard question, and answering it requires either compiling large tables of factorizations of similar-size numbers, running the filtering over and over again, or performing simulations after a little test-sieving. There's no harm in finding more relations than you strictly need for filtering to work at all, although if you overshoot and find twice as many relations as you need, getting the filtering to work can also be difficult. In general the filtering works better if you give it somewhat more relations than it strictly needs, maybe 10% more. As more and more relations are added, the size of the generated matrix becomes smaller and smaller, partly because the filtering can throw away more and more relations to keep only the 'best' ones.

NFS Linear Algebra
------------------

The linear algebra step constructs the matrix generated by the filtering and finds a group of vectors that lie in the nullspace of that matrix. Finding nullspace vectors for a really big matrix is an enormous amount of work. To do the job, Msieve uses the block Lanczos algorithm with a large number of performance optimizations. Even with fast code like that, solving the matrix can take anywhere from a few minutes (factoring a 100-digit input leads to a matrix of size ~200,000) to several months (using the special number field sieve on 280-digit numbers from the Cunningham Project usually leads to matrices of size ~18 million). Even worse, the answer has to be *exactly* correct all the way through; there's no throwing away bad intermediate results, as the other NFS phases can do. So solving a big matrix is a severe test of both your computer and your patience.
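
Block Lanczos itself is beyond a forum post, but the goal of the matrix step, finding combinations of relations whose exponent vectors cancel mod 2, fits in a few lines of toy Gaussian elimination over GF(2). Rows are bitmask integers; everything here is a hypothetical miniature, not how msieve does it:

```python
def find_dependencies(exponent_rows):
    """Gaussian elimination over GF(2) on bitmask rows.

    Each row is one relation's exponent vector mod 2 packed into an int.
    Returns bitmasks over the *rows*: each marks a subset of relations
    whose vectors XOR to zero (a 'dependency' usable by the square root).
    """
    pivots = {}                      # pivot bit -> (reduced row, history)
    deps = []
    for idx, row in enumerate(exponent_rows):
        hist = 1 << idx              # tracks which original rows were combined
        while row:
            b = row.bit_length() - 1
            if b not in pivots:
                pivots[b] = (row, hist)
                break
            prow, phist = pivots[b]
            row ^= prow              # cancel the leading bit...
            hist ^= phist            # ...and remember what we mixed in
        if row == 0:
            deps.append(hist)        # these relations multiply to a square
    return deps

# Three toy exponent vectors that cancel only when all are combined:
assert find_dependencies([0b011, 0b110, 0b101]) == [0b111]
```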

Multithreaded Linear Algebra
----------------------------

The linear algebra is fully multithread aware. Note that the solver is primarily memory bound, and using as many threads as you have cores on your multicore processor will probably not give the best performance. The best number of threads to use depends on the underlying machine; more recent processors have much more powerful memory controllers and can continue speeding up as more and more threads are used. A good rule of thumb to start off is to try two threads for each physical package on your motherboard; even if it's not the fastest choice, just two or four threads gets the vast majority of the potential speedup for the vast majority of machines.

Finally, note that the matrix solver is a 'tightly parallel' computation, which means if you give it four threads then the machine those four threads run on must be mostly idle otherwise. The linear algebra will soak up most of the memory bandwidth your machine has, so if you divert any of it away to something else then the completion time for the linear algebra will suffer.

As for memory use, solving the matrix for a 512-bit input is going to require around 2GB of memory in the solver, and a fast modern processor running the solver with four threads will need about 36 hours. A slow, less modern processor that is busy with other stuff could take up to a week!

NFS Square Root
---------------

With the solution file from the linear algebra in hand, the last phase of NFS postprocessing is the square root.
'An NFS square root' is actually two square roots: an easy one over the integers and a very complex one over the algebraic number field described by the NFS polynomial we selected. Traditionally, the best algorithm for the algebraic part of the NFS square root is the one described by Montgomery and Nguyen, but it requires some quite sophisticated algebraic number theory.
Every solution generated by the linear algebra is called 'a dependency', because it is a linearly dependent vector for the matrix we built.
The square root in Msieve proceeds one dependency at a time; it requires all the relations from the data file, the cycle file from the filtering, and the dependency file from the linear algebra. Technically the square root can be sped up by processing multiple dependencies in parallel, but doing one at a time makes it possible to adapt the precision of the numbers used, saving a great deal of memory toward the end of each dependency.
Each dependency has a 50% chance of finding a nontrivial factorization of the input.
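
The endgame is a congruence of squares: both square roots together give x^2 ≡ y^2 (mod n), and gcd(x - y, n) then splits n unless the dependency was one of the unlucky ~50%. A toy sketch with hypothetical numbers:

```python
from math import gcd

n = 91                    # toy composite to split (7 * 13)
x, y = 10, 3              # 10^2 = 100 ≡ 9 = 3^2 (mod 91)
assert (x * x - y * y) % n == 0

# Works whenever x is not congruent to ±y mod n, the ~50% failure case.
f = gcd(x - y, n)
assert 1 < f < n          # a nontrivial factor
print(f, n // f)          # prints 7 13
```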

msieve is the client used for the postprocessing phase.


In conclusion, factoring an integer using NFS has 3 main steps:

1. Select Polynomial

External to the NFS@Home project. If the number is a GNFS candidate, a polynomial search is necessary, using GPU or CPU.

2. Collect Relations via Sieving

Done via NFS@Home; only CPU sievers are available.

3. Combine Relations

Done in part by NFS@Home: the relations obtained by the sieving effort are stored on the NFS@Home server. The postprocessing phase is done either by external members or on clusters, to take advantage of the multithreaded linear algebra phase; the latter can be run using msieve with MPI.


Post subject: Challenge taking place at BOINCSTATS
Posted: Mon 17 Dec 2012 17:17
Hi there!

I think that with my four previous posts you can get a good picture of what NFS@Home does.
Now to the challenge numbers and figures:

To get the most credit you should run the 16e (lasievef) or 16e V5 (lasieve5f) applications. These applications are sieving a number called 2,1037-. Linux users with more than 1.2 GB of memory per thread should set

Code:
lasieved - app for RSALS subproject, uses less than 0.5 GB memory: no
lasievee - work nearly always available, uses up to 0.5 GB memory: no
lasievef - used for huge factorizations, uses up to 1 GB memory: no
lasieve5f - used for huge factorizations, uses up to 1 GB memory: yes


at "http://escatter11.fullerton.edu/nfs/prefs.php?subset=project".

Windows users should set "yes" for "lasievef" instead; 1.35 GB per thread is needed to run the "lasievef" application. If you don't have that much, choose the lasievee or lasieved application. lasieve5f is not available for Windows.

In terms of points per application the situation is like this:

Code:
lasieved - 36 points per wu
lasievee - 44 points per wu
lasievef - dead
lasieve5f - 130 points per wu


In global settings at "http://escatter11.fullerton.edu/nfs/prefs.php?subset=global" set

Code:
Swap space: use at most:  98% of total
Memory: when computer is in use, use at most: 100% of total
Memory: when computer is not in use, use at most: 100% of total



Carlos Pinho

PS: Windows users with more than 2 GB/thread of memory will get the most out of NFS@Home by running Ubuntu x64 under VirtualBox instead of running it directly under Windows (a new Windows 64-bit version is available).

PS 2: Challenge links: part I at http://boincstats.com/en/stats/challenge/team/chat/283 and part II at http://boincstats.com/en/stats/challenge/team/chat/285.


Last edited by Carlos Pinho on Sun 19 Jul 2015 20:20, edited 1 time in total

Post subject: Re: About NFS@Home in more detail
Posted: Mon 17 Dec 2012 17:17
Come and join the challenges.

2,1037- sieve progress (16e and 16e V5 applications):

The last 16e Lattice Sieve wu received is at ~q=782M.
The last 16e Lattice Sieve V5 wu received is at ~q=959M (the second chunk restarted at 950M, going to 1,000M; i.e. V5 works backwards from 1,000M until it meets 16e in the middle).
The 16e Lattice Sieve V5 application has already sent all wu's from 1,000M to 1,400M (the first chunk).

The overall Q range of the 2,1037- sieve runs from 20M to 1,400M.

Q range situation is this:
20M-782M (sent through 16e application, remaining wu's close to be done)
782M-950M (unsent)
950M-959M (sent, remaining wu's close to be done)
959M-1000M (unsent, will be sent through 16e V5 application)
1000M-1400M (sent through 16e V5 application, remaining wu's close to be done)

In terms of work still to be done we are talking about ~209k wu's left to be crunched, ~113k already created, ~46k in progress.

Carlos Pinho
(Post-processing helper of the NFS@Home BOINC project)


Post subject: Re: About NFS@Home in more detail
Posted: Mon 17 Dec 2012 17:36
Offline Stats
68.4210526316 %

Joined: Sat 15 May 2010 16:27
Posts: 4454
Location: Praha 8
Date of birth: 14 Sep 1947
CNT stats ID: 13496
Carlos Pinho :smt023
Someone who knows, please answer him. I've run NFS a few times, but only casually, and many 16e units went to waste. The 15e 33iii units aren't very rewarding.



Post subject: Re: About NFS@Home in more detail
Posted: Mon 17 Dec 2012 18:07
Offline Stats
78.9473684211 %

Joined: Tue 13 Jan 2009 15:33
Posts: 6329
Date of birth: 0- 0-1956
CNT stats ID: 10124
6 hours before the end?
Too late ..... to validate 45nn .



Post subject: Re: About NFS@Home in more detail
Posted: Mon 17 Dec 2012 18:09
eisler jiri wrote:
Carlos Pinho :smt023
Someone who knows, please answer him. I've run NFS a few times, but only casually, and many 16e units went to waste. The 15e 33iii units aren't very rewarding.


I don't know Czech; could you post in English? Machine translators give something unreadable.

Thank you in advance,

Carlos Pinho


Post subject: Re: About NFS@Home in more detail
Posted: Mon 17 Dec 2012 18:12
nenym wrote:
6 hours before the end?
Too late ..... to validate 45nn .


There's a second Challenge starting at: http://boincstats.com/en/stats/challenge/team/chat/285

Carlos Pinho


Post subject: Re: About NFS@Home in more detail
Posted: Mon 17 Dec 2012 18:33
Offline Stats
CNT web and forum admin

Joined: Thu 29 Mar 2007 09:41
Posts: 9417
Location: Brušperk, 48 years old
CNT stats ID: 1
Thanks, Carlos, for the invitation.
I'm afraid the timing of the challenge does not suit us; we prefer the challenge on PrimeGrid (18-21 Dec). Sorry, maybe next time.



Post subject: Re: About NFS@Home in more detail
Posted: Mon 17 Dec 2012 20:44
vkliber wrote:
Thanks, Carlos, for the invitation.
I'm afraid the timing of the challenge does not suit us; we prefer the challenge on PrimeGrid (18-21 Dec). Sorry, maybe next time.


Good luck on that challenge. Hint: don't use CPU to sieve, only GPU.


Post subject: Re: About NFS@Home in more detail
Posted: Tue 18 Dec 2012 12:26
Second part of the challenge is underway at http://boincstats.com/en/stats/challenge/team/chat/285.

2,1037- figures:

The last 16e Lattice Sieve wu received is at ~q=801M.
The last 16e Lattice Sieve V5 wu received is at ~q=970M (the second chunk restarted at 950M and runs to 1,000M, meaning working backwards from 1,000M until it meets 16e in the middle).
The 16e Lattice Sieve V5 application has already sent all wu's from 1,000M to 1,400M (the first chunk).

The overall Q range for the 2,1037- sieve runs from 20M to 1,400M.

The Q range situation is:
20M-801M (sent through the 16e application; remaining wu's nearly done)
801M-950M (unsent)
950M-970M (sent; remaining wu's nearly done)
970M-1000M (unsent; will be sent through the 16e V5 application)
1000M-1400M (sent through the 16e V5 application; remaining wu's nearly done)
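Incidentally, the unsent gaps follow mechanically from the sent chunks: they are the complement of the sent ranges within the overall 20M-1,400M interval. A minimal Python sketch of that bookkeeping (my own illustration; the ranges, in millions of q, are the ones listed in this post):

```python
def unsent_gaps(total, sent):
    """Return the parts of the total Q interval not covered by any
    sent chunk.  Ranges are (start, end) pairs in millions of q."""
    gaps, cursor = [], total[0]
    for start, end in sorted(sent):
        if start > cursor:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < total[1]:
        gaps.append((cursor, total[1]))
    return gaps

# Chunks sent so far: 16e up to 801M, V5 at 950-970M and 1,000-1,400M.
print(unsent_gaps((20, 1400), [(20, 801), (950, 970), (1000, 1400)]))
# -> [(801, 950), (970, 1000)], matching the unsent ranges above
```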

In terms of work still to be done, we're talking about ~179k wu's left to crunch, ~82k already created, ~46k in progress.

Based on my machine (Core i5 750 with cache size set to 200, doing 180 wu's a day), the leading edge of the undone wu's is about 11M ahead of the finished ones, not counting the duplicate wu's that are probably mixed in there.
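For a sense of scale (my own back-of-the-envelope arithmetic, not a project figure): at the single-host rate quoted above, the outstanding work alone would keep one machine busy for years, which is why it is spread across the whole project:

```python
# Back-of-the-envelope ETA using the numbers from this post.
remaining_wus = 179_000      # wu's left to be crunched, project-wide
wus_per_day_one_host = 180   # one Core i5 750, cache size 200

days_for_one_host = remaining_wus / wus_per_day_one_host
print(round(days_for_one_host))  # roughly 1,000 days for a single host
```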


Subject: Re: About NFS@Home in more detail
Posted: Wed 19 Dec 2012, 23:45
Thank you for the cores crunching NFS@Home. Keep them busy!

Carlos Pinho


Subject: Re: About NFS@Home in more detail
Posted: Thu 20 Dec 2012, 09:52
The last 16e Lattice Sieve wu received is at ~q=837M.
The last 16e Lattice Sieve V5 wu received is at ~q=991M (the second chunk restarted at 950M and runs to 1,000M, meaning working backwards from 1,000M until it meets 16e in the middle).
The 16e Lattice Sieve V5 application has already sent all wu's from 1,000M to 1,400M (the first chunk).

The overall Q range for the 2,1037- sieve runs from 20M to 1,400M.

The Q range situation is:
20M-837M (sent through the 16e application; remaining wu's nearly done)
837M-950M (unsent)
950M-991M (sent; remaining wu's nearly done)
991M-1000M (unsent; will be sent through the 16e V5 application)
1000M-1400M (sent through the 16e V5 application; remaining wu's nearly done)


Subject: Re: About NFS@Home in more detail
Posted: Fri 21 Dec 2012, 08:17
The last 16e Lattice Sieve wu received is at ~q=849M.
The last 16e Lattice Sieve V5 wu received is at ~q=932M (the third chunk restarted at 930M and runs to 950M, meaning working backwards from 1,000M until it meets 16e in the middle).
The 16e Lattice Sieve V5 application has already sent all wu's from 950M to 1,400M.

The overall Q range for the 2,1037- sieve runs from 20M to 1,400M.

The Q range situation is:
20M-849M (sent through the 16e application; remaining wu's nearly done)
849M-930M (unsent)
930M-932M (sent; remaining wu's nearly done)
932M-950M (unsent)
950M-1400M (sent through the 16e V5 application; remaining wu's nearly done)

Lots of aborted wu's are being redone in the range below 800M.

If you are only running the lasieve5f application, please consider also enabling the lasievef application. Linux users can run both; Windows users only lasievef.
NFS@Home is reaching the point where the two q regions sieved separately by 16e and 16e V5 are about to meet, so it is better for Linux users to enable both applications. I already did, partly because that lets a lot of the aborted wu's below 800M get done, so I am going to run a mix of the two.


Subject: Re: About NFS@Home in more detail
Posted: Mon 24 Dec 2012, 12:43
The last 16e Lattice Sieve wu received is at ~q=921M.
The 16e Lattice Sieve V5 application has already sent all wu's from 930M to 1,400M.

The overall Q range for the 2,1037- sieve runs from 20M to 1,400M.

The Q range situation is:
20M-921M (sent through the 16e application; remaining wu's nearly done)
921M-930M (unsent; will be sent through the 16e application)
930M-1400M (sent through the 16e V5 application; remaining wu's nearly done)

The 16e V5 application has started another number (2,1049+) from q=1000M, because all wu's for 2,1037- have been distributed; only the leftover wu's remain. I've also set my CPU to do only 16e Lattice Sieve wu's instead of both the 16e and 16e V5 applications.

In conclusion, 2,1037- will be completely sieved by the end of the year.

Carlos Pinho


Subject: Re: About NFS@Home in more detail
Posted: Wed 26 Dec 2012, 04:38
Status of 2,1037-:

20,948 wu's left to crunch between the 16e and 16e V5 applications to finally close out the 2,1037- sieve.

Status of 2,1049+:

The last 16e Lattice Sieve wu received is at ~q=23M (goal: 1,000M).
The last 16e Lattice Sieve V5 wu received is at ~q=1046M (started at 1,000M, goal unknown; then backwards from 1,000M until it meets 16e in the middle).


Subject: Re: About NFS@Home in more detail
Posted: Wed 02 Jan 2013, 16:04
I'll post the status of 2,1037- soon; the post-processing phase has started.

About the 2,1049+ NFS@Home sieve:

The last 16e Lattice Sieve wu received is at ~q=129M (goal: 1,000M).
The last 16e Lattice Sieve V5 wu received is at ~q=1095M (goal unknown; then backwards from 1,000M until it meets 16e in the middle).

From now on I'll update weekly.

Carlos


Subject: Re: About NFS@Home in more detail
Posted: Thu 04 Apr 2013, 08:53
NFS@Home April Showers Challenges

April Showers I

http://boincstats.com/en/stats/challenge/team/chat/360

April Showers II

http://boincstats.com/en/stats/challenge/team/chat/362

2,1049+ status:

The last 16e Lattice Sieve wu received is at ~q=906M (goal: 1,000M).
The last 16e Lattice Sieve V5 wu received is at ~q=1495M (goal: 1,650M).

We really need help on the 16e Lattice Sieve V5 application. Go to

http://escatter11.fullerton.edu/nfs/prefs.php?subset=project

and set

Run only the selected applications
lasieved - app for RSALS subproject, uses less than 0.5 GB memory: no
lasievee - work nearly always available, uses up to 0.5 GB memory: no
lasievef - used for huge factorizations, uses up to 1 GB memory: no
lasieve5f - used for huge factorizations, uses up to 1 GB memory: yes

if you are a Linux user with more than 1.3 GB of memory per thread.

Carlos Pinho

