[cygwin] DD bug fails to wipe last 48 sectors of a disk

Jason Pyeron jpyeron@pdinc.us
Mon Jan 4 21:08:53 GMT 2021


> From: Hashim Aziz
> Sent: Monday, January 4, 2021 3:16 PM
>
> > From: Jason Pyeron 
> > Sent: 30 December 2020 1:35 AM
> > To: cygwin@cygwin.com
> > Subject: RE: [cygwin] DD bug fails to wipe last 48 sectors of a disk
> >
> > > -----Original Message-----
> > > From: Brian Inglis
> > > Sent: Tuesday, December 29, 2020 12:55 PM
> > >
> > > On 2020-12-28 19:41, Jason Pyeron wrote:
> > > > On Monday, December 28, 2020 7:46 PM, Hashim Aziz wrote:
> > > >> On 23 June 2020 8:33 PM, Brian Inglis wrote:
> > > >>> I don't have the facilities to test, and there appear to be *NO* Windows
> > > >>> documentation details on error condition handling, but my suspicion is
> > > >>> that Unix reads and writes fail only *AFTER* reading or writing at the
> > > >>> end of the device, but Windows reads and writes extents may be checked
> > > >>> and failed *BEFORE* reading or writing any data near the end of the
> > > >>> device.
> > > >>> If the actual Windows error code returned is generic, Cygwin would need
> > > >>> to pre-check the device size as Windows does, and reduce read and write
> > > >>> sizes to the allowed maximum at the end of the device.
> > >
> > > >> That's very helpful, thank you. Do you know if any more work has been done
> > > >> to attempt to fix this bug, and whether it's likely to be fixed anytime
> > > >> soon? It's crazy that such a commonly used command leaves so much data
> > > >> unwiped unbeknown to so many users, it's a very serious security hole and
> > > >> the sooner it can be fixed the better.
> > >
> > > > Have you tried iflag=fullblock? It makes dd accumulate full input blocks rather than passing short reads through.
> > >
> > > >> I didn't previously see this email, but the point is that this is a bug -
> > > >> dd should not require first making calculations based on the size of each
> > > >> drive or using the smallest possible block size (and hence taking a
> > > >> ridiculous amount of time) in order to do what
> > >
> > > > Do you have any metrics that it is faster, by any meaningful amount? If so I
> > > > would be very interested in mitigating it, but I suspect not the actual
> > > > case.
>
> That dd is faster with a large block size like 4M than with the default of 512 or 1024 has
> long been attested by anyone who has used these tools on large drives - here's just
> one example from SuperUser: https://superuser.com/a/234241/323079

But I just copied 1TB in 2 hours with Cygwin dd using a GCD-calculated block size. No one is disputing that bs=512 is slow; the issue seems to be that you do not like using a block size that evenly divides the actual drive size, and are insisting on a non-divisible one.
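
For concreteness, roughly what that looks like (a sketch, not a tested script; /dev/sdb is a placeholder, and blockdev(8) from util-linux is an assumption - on Cygwin the device size can also be read from /proc/partitions):

    DEV=/dev/sdb                              # placeholder device
    BYTES=$(blockdev --getsize64 "$DEV")      # exact device size in bytes
    BS=$((4 * 1024 * 1024))                   # start from a fast 4 MiB block
    while [ $((BYTES % BS)) -ne 0 ]; do       # halve until it divides evenly;
        BS=$((BS / 2))                        # bottoms out at the sector size
    done
    dd if=/dev/zero of="$DEV" bs=$BS count=$((BYTES / BS))

Because the count is exact, the final write ends exactly at the end of the device instead of overshooting it.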

> >
> > > Could you please state explicitly, how many bytes/sectors/blocks/pages/clusters
> > > of what size you expect to get written, and how many
> > > bytes/sectors/blocks/pages/clusters of what size are actually written?
> >
> > As mentioned in the original email and in https://superuser.com/a/1509903/323079, I have confirmed that everything but the last 48 sectors of the drive is wiped, on both a 1TB HDD and a 128GB SSD, by inspecting the disks themselves with a hex editor (opening the raw disks in HxD). As someone else said earlier, on both occasions this points to the very last block failing to write properly.
> > BUT I want a clear test plan first, with clearly articulated issues, BECAUSE I do not believe any issues actually exist.
> >
> > The best I can glean from the thread is
> >
> > 1. Cygwin is allegedly breaking dd, but the same breakage exists in the Windows-native dd [1]
> > 2. Using "correct" (as I have previously defined it e.g. [4]) values is both
> > 2.1. too slow on the write [2]
> > 2.2. too complicated a process for the user
> > 3. It has been well investigated
> > 3.1. with a root cause identified as a Windows OS issue [3,5,6]
> > 3.2. with a mitigation [4,5]
> >
> > I am happy to spend time and money on 2.1.
> > I will not spend my time on dealing with 2.2, except to provide an example and a documentation update for upstream.
>
> Isn't the entire purpose of Cygwin to have tools that behave as their upstream equivalents do? 

Yes(ish), but upstream has this issue too. As their documentation indicates, some OSes are not forgiving.

> Upstream dd has no problems running the same command without these issues,

You indicated that the Windows-native dd has the same issue, whereas dd on Linux (a different OS) behaves differently.

> and I'm sure upstream would agree on the serious security hole that leaving behind the last 48 sectors of every disk is.

No, this appears to be user error; I do not see the issue when I execute the command correctly.
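
If you want to check that yourself, a quick read-back of the tail (again a sketch; assumes the same placeholder device and a 512-byte logical sector size):

    DEV=/dev/sdb
    BYTES=$(blockdev --getsize64 "$DEV")
    # dump the last 48 sectors; a fully wiped tail prints one line of 00s
    dd if="$DEV" bs=512 skip=$((BYTES / 512 - 48)) count=48 2>/dev/null \
        | od -v -An -tx1 | sort -u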

>
> The only logical conclusion here - and it seems bizarre to me that there is any pushback against it at all - is to make Cygwin dd behave like upstream dd (however that is done, and whoever's patch it is), so that Cygwin dd behaves as expected and doesn't go around leaving disks with data on them unbeknownst to users.

There is no patch needed to make dd behave the same, because it is behaving the same.

> Most users of upstream dd are not calculating a "correct" blocksize beforehand because upstream dd doesn't require them to,

RTFM - it says that some OSes require those values to be correct.
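
That said, Brian's earlier suggestion - shrink the writes near the end of the device - is something you can do by hand today. A hedged two-pass sketch (same placeholder device; assumes 512-byte sectors):

    DEV=/dev/sdb
    BYTES=$(blockdev --getsize64 "$DEV")
    CHUNK=$((4 * 1024 * 1024))
    BULK=$((BYTES / CHUNK))                   # whole 4 MiB blocks
    dd if=/dev/zero of="$DEV" bs=4M count=$BULK
    # finish the remainder in sector-sized writes so nothing overshoots
    dd if=/dev/zero of="$DEV" bs=512 seek=$((BULK * CHUNK / 512)) \
        count=$(((BYTES % CHUNK) / 512))

The bulk pass gets the large-block speed, and the short second pass never asks Windows to write past the end of the device.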

> and neither should Cygwin dd. Securely wiping data should be as accessible to novice 

Security should not be done by novices with tools not designed for novices - dd is not a disk-wiping tool; it is a block-device copy tool.

> command-line users as possible, without such barriers as calculating disk sizes in the way, and this is something that upstream is obviously mindful of, so why isn't Cygwin?

Feel free to provide us (me) with the upstream bug number, and I will track it and participate in testing.


