I'm doing some testing with pdftk and finding that bursting a multi-page PDF file into separate single-page PDF files, then generating an MD5 hash checksum (digital fingerprint) for each of those single-page PDFs, produces a different hash every time I do the burst. This happens even though the input is the exact same file with no changes.
My test process is:
Side note: generating a checksum on the PDF after decompression yields the exact same checksum upon repetition.
I'm using node.js and its crypto module for this exercise.
My question is: Why do the checksums differ upon repetition? I would think that the resulting 10 single-page files are exactly the same as the last time they were created. Their parent document (and thus the individual pages themselves) has not changed at all.
According to the PDF spec, whenever a PDF creator writes out a modified PDF, it should update the /ModDate key in the document's /Info dictionary of metadata entries.
Also, it will (likely) change the document UUID in the PDF's XMP metadata structure to a new ID.
So, when you want to use MD5 (or any similar method) to check for 'stable results' in your PDF generation processes (think of unit tests or the like), you should normalize these volatile entries before applying your MD5-summing: run a scripted search-and-replace (for example with sed) over the files that rewrites the /ModDate (and possibly also the /CreationDate) and UUID entries to fixed values.
Update: Since you seem to be familiar with pdftk already, you should be able to dump a metadata text file (like Ezra showed):
pdftk in.pdf dump_data output data.txt
or (in case you need it):
pdftk in.pdf dump_data_utf8 output data.utf8.txt
Then edit the data*.txt files to suit your needs: change the PDF UUIDs (pdftk calls them PdfID0 / PdfID1) to easily recognizable values (00000... and fffff...), and change the dates to other easily recognizable ones. Then update your files with these metadata values:
pdftk in.pdf update_info data.txt output in-updated.pdf \
&& mv in-updated.pdf in.pdf
or
pdftk in.pdf update_info data.utf8.txt output in-updated.utf8.pdf \
&& mv in-updated.utf8.pdf in.pdf
Only then run your MD5 checksumming and see if it works (or needs some more fine-tuning).
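The "edit the data*.txt files" step above can be scripted. Here is a hedged node.js sketch that rewrites the volatile fields of a pdftk dump_data text file to fixed placeholders (the placeholder values are arbitrary choices of mine, not anything pdftk requires):

```javascript
// Rewrite the volatile fields in pdftk dump_data output so that
// repeated bursts of the same PDF produce identical metadata.
const FIXED_DATE = "D:19700101000000+00'00'"; // arbitrary fixed date
const FIXED_ID0 = '0'.repeat(32);             // easily recognizable IDs
const FIXED_ID1 = 'f'.repeat(32);

function normalizeDump(text) {
  let prevKey = null;
  return text.split('\n').map((line) => {
    if (line.startsWith('InfoKey: ')) {
      prevKey = line.slice('InfoKey: '.length);
      return line;
    }
    // dump_data pairs each InfoKey line with the following InfoValue line.
    if (line.startsWith('InfoValue: ') &&
        (prevKey === 'ModDate' || prevKey === 'CreationDate')) {
      return 'InfoValue: ' + FIXED_DATE;
    }
    if (line.startsWith('PdfID0: ')) return 'PdfID0: ' + FIXED_ID0;
    if (line.startsWith('PdfID1: ')) return 'PdfID1: ' + FIXED_ID1;
    return line;
  }).join('\n');
}
```

Run the result back through `pdftk ... update_info ...` as shown above, and the normalized files should then checksum identically.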
The glib answer is that the checksums differ because the data is different.
Here is an experiment with which I confirmed this:
First, I burst a pdf and move the file:
$ pdftk Michael-Jordan-I-Cant-Accept-Not-Trying.pdf burst
$ md5sum pg_0001.pdf
150ef33eec73cd13c957194ebead0e93 pg_0001.pdf
$ mv pg_0001.pdf 150ef33eec73cd13c957194ebead0e93
Next, I burst the same pdf again, again moving the file:
$ pdftk Michael-Jordan-I-Cant-Accept-Not-Trying.pdf burst
$ md5sum pg_0001.pdf
49c7c885bc516856f4316452029e0626 pg_0001.pdf
$ mv pg_0001.pdf 49c7c885bc516856f4316452029e0626
This confirmed your finding: the sums are different. Upon inspection, it is bytes 91411-92163 that differ.
My gut told me that this was date metadata, and I confirmed this thusly:
$ pdftk 150ef33eec73cd13c957194ebead0e93 dump_data output 150.txt
$ pdftk 49c7c885bc516856f4316452029e0626 dump_data output 49c.txt
$ diff -u 150.txt 49c.txt
--- 150.txt 2012-07-10 11:08:02.371119999 -0600
+++ 49c.txt 2012-07-10 11:08:18.891201910 -0600
@@ -3,9 +3,9 @@
InfoKey: Producer
InfoValue: itext-paulo-155 (itextpdf.sf.net-lowagie.com)
InfoKey: ModDate
-InfoValue: D:20120710105934-06'00'
+InfoValue: D:20120710110010-06'00'
InfoKey: CreationDate
-InfoValue: D:20120710105934-06'00'
-PdfID0: 51671a1a6c4f5e6bb81b88fc7efd14d0
-PdfID1: 82fd646061863972216ccf8a32cf3c7b
+InfoValue: D:20120710110010-06'00'
+PdfID0: 844f34f87275b9184ebe10b82d3397c9
+PdfID1: 8f555a30216e37d77abaf03a4217b2a
NumberOfPages: 1
I'm not sure what your problem is, but if you really need matching sums, two obvious approaches are: