In my case the large majority of files I work with are either UTF-8 with no BOM, or UTF-16LE with no BOM. As mentioned in other bug reports and feature requests, such files aren't exactly rare.
As things stand (jEdit can't auto-detect UTF-16LE without a BOM), I can't conveniently use jEdit on all of them: whichever encoding I configure as the default, files of the other kind will consistently load incorrectly and require a manual reload.
Worse, it also means that a search in directory/files can't cover both of these file types correctly.
I do recognize that UTF-16 without a BOM can't be detected with 100% reliability. But in my case, and I expect in the large majority of cases, most of the characters in such files come from the ASCII subset. So a heuristic that examines the start of the file for a pattern of alternating \x00 and non-\x00 bytes would have a very high success rate at correctly guessing whether a file is UTF-16LE or UTF-16BE without a BOM.
This is also a pattern that is unlikely to be anything else, at least unless jEdit is regularly used to open binary files, so the risk of wrong behavior is minimal. (Of course it's also possible to make this heuristic a configurable option, but my point is that this additional complexity can probably be avoided.)
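To make the suggestion concrete, here is a minimal sketch of the heuristic in Java (jEdit's language). The class name, sample size, and the 3/4 majority threshold are my own illustrative choices, not anything from jEdit's code: it samples the leading bytes and guesses UTF-16LE or UTF-16BE from where the \x00 bytes fall in each two-byte code unit, returning null when undecided.

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

/**
 * Hypothetical sketch of the proposed heuristic: for mostly-ASCII text,
 * UTF-16 shows an alternating pattern of \x00 and non-\x00 bytes, and the
 * position of the \x00 within each code unit reveals the byte order.
 */
public class Utf16Sniffer {

    /** How many leading bytes to sample; an arbitrary choice for this sketch. */
    private static final int SAMPLE = 64;

    /**
     * Returns UTF-16LE or UTF-16BE if the sample shows the alternating
     * NUL pattern of mostly-ASCII UTF-16 text, or null if undecided.
     */
    public static Charset guess(byte[] head) {
        int len = Math.min(head.length, SAMPLE) & ~1; // whole 2-byte units only
        if (len < 4) {
            return null; // too little data to judge
        }
        int nulEven = 0, nulOdd = 0;
        for (int i = 0; i < len; i += 2) {
            if (head[i] == 0) nulEven++;
            if (head[i + 1] == 0) nulOdd++;
        }
        int units = len / 2;
        // ASCII characters put the \x00 in the high byte: that is the odd
        // offset for UTF-16LE and the even offset for UTF-16BE. Require a
        // clear majority on one side and none at all on the other.
        if (nulOdd > units * 3 / 4 && nulEven == 0) {
            return StandardCharsets.UTF_16LE;
        }
        if (nulEven > units * 3 / 4 && nulOdd == 0) {
            return StandardCharsets.UTF_16BE;
        }
        return null; // no recognizable pattern; fall back to configured default
    }
}
```

A caller would try this only after the normal BOM check fails, falling back to the user-configured default encoding when the method returns null, so existing behavior is unchanged for files the heuristic can't classify.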