On Windows and Linux systems an accidental power loss can corrupt a recently written file with junk or null bytes. This happens very often when running inside non-Hyper-V VMs (Hyper-V would otherwise have aligned the .vhdx with the physical storage and flushed writes all the way through), and less often, though still sometimes, on physical disks. For example, corruption happened very often with .vmdk files on VMware on Windows, especially to database files.
Some critical Linux services create multiple backups of their settings file. For example, the iSCSI target (LIO) management library rtslib keeps 10 backups of /etc/target/saveconfig.json (though I'm not sure this helps if someone edits it 11 times in fast succession).
While iOS devices do a clean shutdown when the battery is almost depleted, there are still scenarios where accidental power loss can happen: for example, freezing temperatures can cause the battery to suddenly and unexpectedly fail to deliver enough current (an aluminium iPhone in a freezing wind would die pretty fast under use, or even while idling).
I have a similar situation where a particular settings file is critical in an iOS app.
Do FileHandle.close() or Data.write(options: .atomic) on Swift iOS also flush to disk immediately? If so, does the atomic write flush? Is it before the rename or after? Should I avoid String.data.write()?

Is a FileHandle.synchronize() still necessary on iOS (after String.data.write()) to make sure cached data has been written to storage?

After String.data.write() has completed, would opening a file handle to the same file and calling FileHandle.synchronize() cause the previously written data to sync? Or is it already synced?
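To make the question concrete, this is the belt-and-braces sequence I would fall back to if .atomic alone does not flush (the file name and payload here are made up):

```swift
import Foundation

// Illustrative settings file and payload.
let url = FileManager.default.temporaryDirectory
    .appendingPathComponent("settings.json")
let payload = Data("{\"theme\":\"dark\"}".utf8)

// .atomic writes to a temporary file and renames it over the destination,
// but it is unclear whether the data is guaranteed to reach stable storage.
try payload.write(to: url, options: .atomic)

// Reopen the same file and synchronize, to push cached pages to the device.
let handle = try FileHandle(forUpdating: url)
try handle.synchronize()   // throwing variant, iOS 13+ / macOS 10.15+
try handle.close()
```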
2 Answers
Various documentation comments I’ve stumbled upon in the Swift source code on GitHub (I’m currently on an iPad) also do not indicate any kind of dirty-buffer syncing. I’m going to do things the old-fashioned way until the situation becomes clearer.

Perhaps a .synchronize option for the .write() function is something we’ll get in the future.
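The "old fashioned way" I mean is roughly this: write the bytes, then explicitly flush them yourself rather than assuming .write(options:) does it. A sketch (the function name is mine; on Apple platforms, F_FULLFSYNC asks the storage device itself to drain its cache, while plain fsync only guarantees the OS buffer cache is flushed):

```swift
import Foundation
#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

// Durable write: atomic replace, then an explicit flush to stable storage.
func durableWrite(_ data: Data, to url: URL) throws {
    try data.write(to: url, options: .atomic)
    let handle = try FileHandle(forUpdating: url)
    defer { try? handle.close() }
    #if canImport(Darwin)
    if fcntl(handle.fileDescriptor, F_FULLFSYNC) != 0 {
        // Some file systems reject F_FULLFSYNC; fall back to fsync.
        fsync(handle.fileDescriptor)
    }
    #else
    fsync(handle.fileDescriptor)
    #endif
}
```

Until Foundation gains a synchronize-style write option, this is the closest equivalent I know of to a guaranteed flush.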
atomic means atomic. The file is either written, or there is no change.
I don’t know if it is perfect, but I’m sure nothing you could do would be better. First, the data is written to an unnamed file. That’s where a power cut would most likely hit you, but since there is no directory entry yet, the worst case is that you lose the new data. Then the directory changes are prepared, but not yet linked into the file system.

So now all that is needed is ONE atomic write to link all the changes into the file system. And you always want that: you don’t want half-written blocks. So the hardware checks for power input just before starting the actual write (after spinning up the drive, moving to the right track and waiting for the block), and then it only needs enough power to finish that one physical write.
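A minimal sketch of that write-then-link sequence in Swift (names are illustrative; this is roughly the pattern described above, not Foundation's actual implementation of .atomic):

```swift
import Foundation

// 1. Write the data to a side file nothing references yet,
// 2. flush it, 3. atomically rename it over the destination.
func atomicReplace(_ data: Data, at url: URL) throws {
    let tmp = url.deletingLastPathComponent()
        .appendingPathComponent(".\(url.lastPathComponent).\(UUID().uuidString)")
    try data.write(to: tmp)        // a power cut here only loses the temp file
    let fh = try FileHandle(forUpdating: tmp)
    try fh.synchronize()           // make sure the bytes land before the rename
    try fh.close()
    if FileManager.default.fileExists(atPath: url.path) {
        _ = try FileManager.default.replaceItemAt(url, withItemAt: tmp)
    } else {
        try FileManager.default.moveItem(at: tmp, to: url)  // the one "linking" step
    }
}
```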
Comments: “sync” will do its best to make sure that all software write calls end up on the disk. This obviously fails if power is lost before “sync” runs. It fails badly if power is lost between the write head starting the first write and finishing the last one. That part is up to the hardware: there is a point where the power input is lost, and a later point where writes start to fail, possibly in the middle of a write, due to lack of power; the failure is not immediate. So if the hardware detects that the power cable is unplugged (no power input) and refuses to start the first write in that case, then even if power disappears a nanosecond after that check, it still has enough reserve to finish the writes already in flight.
An atomic write would do the syncing itself.