Handling of incorrect MIME types

Discussion of features in Mozilla Firefox
Post Reply
JeroenV
Posts: 54
Joined: August 11th, 2003, 6:52 am

Post by JeroenV »

Dunderklumpen wrote:
JeroenV wrote:Yes I did. The goal is to make a standards compliant browser, but what's the point of making a standards compliant browser if it's not usable?


Do you think that the developers will abandon the goal of making the browser standards compliant?
Simple answer - yes or no?


There is no simple answer to that question. I think they should implement it; if that's against the standards, so be it. I don't know if they would, I just think they should. You see misconfigured servers everywhere, even on a kernel.org mirror. This can't just be ignored...

When it comes to stuff like this, devs should ask themselves what their main priority is: their users or the standards. Before the death of Netscape I would've understood if they chose the standards, but since FB will become an end-user application I think users are more important.

Dunderklumpen wrote:
JeroenV wrote:Most of these things will eventually end up in the core. I don't think FB 1.0 will ship without an icon, for example. The reason for these extensions is that FB is still beta software, and we need them to make the browser usable for some of us. That's fine for now, but if we still have to use these extensions when we get to 1.0, there's something seriously wrong.


You must have missed something when it comes to Firebird - extensions are one of its main features.


Extensions should extend the functionality of the browser, not "make the browser usable". Extensions are a good idea, but you can't just patch everything with an extension; that'll become too complicated for the users. Basic things like this should not be solved with extensions.
Dunderklumpen
Posts: 16224
Joined: March 9th, 2003, 8:12 am

Post by Dunderklumpen »

JeroenV wrote:There is no simple answer to that question.


The simple answer is no. The developers will not implement this in the core and they will not abandon their goals.

This makes a discussion about anything other than an extension or a plugin a complete waste of time.
JeroenV
Posts: 54
Joined: August 11th, 2003, 6:52 am

Post by JeroenV »

Dunderklumpen wrote:
JeroenV wrote:There is no simple answer to that question.


The simple answer is no. The developers will not implement this in the core and they will not abandon their goals.


So, devs would rather have a non-functional 100% standards compliant browser than a 99.9% standards compliant browser that is actually usable? I doubt that. Blindly following your goals and not listening to user input is just stupid. End users don't care about standards, they care about functionality. I know it's "wrong", but you have to see it from a user perspective here.

There is some RFC (I forgot which one) that says "Be strict when sending, but tolerant when receiving". Well, that pretty much sums it up, doesn't it?
Dunderklumpen
Posts: 16224
Joined: March 9th, 2003, 8:12 am

Post by Dunderklumpen »

JeroenV wrote:So, devs would rather have a non-functional 100% standards compliant browser than a 99.9% standards compliant browser that is actually usable? I doubt that.


Excuse me, but you just don't get it.

The browser will stick to the standards - fact
Handling files according to the MIME type that the server sends out is the standard - fact

We are now trying, desperately, to discuss a solution in the form of an extension or a plugin - and you still question the basic, fundamental facts above. And you continue to answer every question with a new question. Pointless and a waste of time.

I give up....
User avatar
Paradox52525
Posts: 1219
Joined: April 23rd, 2003, 9:13 am
Location: Middle of nowhere
Contact:

Post by Paradox52525 »

Again I have to say that this entire argument is moot until the code is written to actually DO this. Someone could write it in extension form and then, when we have it working, we can debate whether or not it should be added to the core and possibly submit a patch, but again all of this argument is pointless until the code exists.
Dunderklumpen
Posts: 16224
Joined: March 9th, 2003, 8:12 am

Post by Dunderklumpen »

Paradox52525 wrote:Again I have to say that this entire argument is moot until the code is written to actually DO this. Someone could write it in extension form and then, when we have it working, we can debate whether or not it should be added to the core and possibly submit a patch, but again all of this argument is pointless until the code exists.


Applause...
dtobias
Posts: 2098
Joined: November 9th, 2002, 3:35 pm
Location: Boca Raton, FL
Contact:

Post by dtobias »

JeroenV wrote:There is no simple answer to that question. I think they should implement it; if that's against the standards, so be it. I don't know if they would, I just think they should. You see misconfigured servers everywhere, even on a kernel.org mirror. This can't just be ignored...


Well, that particular document comes off the wire with these headers:

Content-Type: text/plain
Content-Encoding: x-gzip

which say that it's plain text, gzipped for transfer purposes. That is usually dealt with by the browser automatically un-gzipping the data and then displaying it as plain text, and that's what Mozilla does. In this case, once it's unzipped, the data actually *is* plain text (though of a particular format intended for use as a patch file); no raw binary data is there.

What exactly do you expect Mozilla to do here by second-guessing? And how would you expect its behavior to differ from its handling of ordinary plain text with gzip transfer encoding, and how should it know the difference?
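
As a rough sketch of that behaviour (in Python rather than Mozilla's actual C++, and with a hypothetical mirror URL), the transfer encoding is undone first, and only then is the body interpreted under the declared Content-Type:

    import gzip
    import urllib.request

    # Hypothetical URL standing in for the kernel.org mirror link.
    with urllib.request.urlopen("http://mirror.example.org/patch-2.4.22.gz") as resp:
        body = resp.read()
        # Content-Encoding describes the transfer, not the document itself,
        # so it is stripped before the Content-Type is consulted.
        if resp.headers.get("Content-Encoding", "").endswith("gzip"):
            body = gzip.decompress(body)
        if resp.headers.get("Content-Type", "").startswith("text/plain"):
            print(body.decode("latin-1"))  # render as plain text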
Dan's Web Tips: http://webtips.dan.info/
Dan's Domain Site: http://domains.dan.info/
Dan's Mail Format Site: http://mailformat.dan.info/
SuperJeff
Posts: 62
Joined: May 17th, 2003, 4:23 pm

Post by SuperJeff »

dtobias wrote:
JeroenV wrote:There is no simple answer to that question. I think they should implement it; if that's against the standards, so be it. I don't know if they would, I just think they should. You see misconfigured servers everywhere, even on a kernel.org mirror. This can't just be ignored...


Well, that particular document comes off the wire with these headers:

Content-Type: text/plain
Content-Encoding: x-gzip

which say that it's plain text, gzipped for transfer purposes. That is usually dealt with by the browser automatically un-gzipping the data and then displaying it as plain text, and that's what Mozilla does. In this case, once it's unzipped, the data actually *is* plain text (though of a particular format intended for use as a patch file); no raw binary data is there.

What exactly do you expect Mozilla to do here by second-guessing? And how would you expect its behavior to differ from its handling of ordinary plain text with gzip transfer encoding, and how should it know the difference?


If I understand the issue correctly, that particular link is not the same problem we are concerned about. We don't want Mozilla to guess the content encoding; we want it to warn the user if (after decoding the content) a document that is labeled as text/plain contains binary characters, giving them the option to save it or view it in Mozilla anyway. Don't guess anything, just give a warning in cases where we know we are reading a binary file as text/plain.
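
A minimal sketch of that check in Python (not actual Mozilla code; the console prompt stands in for the browser dialog, and the filename is a placeholder):

    def looks_binary(body: bytes, sniff_len: int = 1024) -> bool:
        # Control bytes other than tab/LF/VT/FF/CR almost never occur in real text.
        suspicious = set(range(0, 9)) | set(range(14, 32))
        return any(b in suspicious for b in body[:sniff_len])

    def show_text_plain(body: bytes) -> None:
        if looks_binary(body):
            # In the browser this would be a warning dialog with the two options.
            choice = input("This text/plain document looks binary. [s]ave or [v]iew? ")
            if choice.lower().startswith("s"):
                with open("download.bin", "wb") as f:  # placeholder filename
                    f.write(body)
                return
        print(body.decode("latin-1"))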
JeroenV
Posts: 54
Joined: August 11th, 2003, 6:52 am

Post by JeroenV »

dtobias wrote:
JeroenV wrote:There is no simple answer to that question. I think they should implement it; if that's against the standards, so be it. I don't know if they would, I just think they should. You see misconfigured servers everywhere, even on a kernel.org mirror. This can't just be ignored...


Well, that particular document comes off the wire with these headers:

Content-Type: text/plain
Content-Encoding: x-gzip

which say that it's plain text, gzipped for transfer purposes. That is usually dealt with by the browser automatically un-gzipping the data and then displaying it as plain text, and that's what Mozilla does. In this case, once it's unzipped, the data actually *is* plain text (though of a particular format intended for use as a patch file); no raw binary data is there.


Hmm, my mistake. I got the link from another forum where they were complaining about this; the main kernel.org server serves it as a normal file. I should've checked more carefully.

dtobias wrote:What exactly do you expect Mozilla to do here by second-guessing? And how would you expect its behavior to differ from its handling of ordinary plain text with gzip transfer encoding, and how should it know the difference?


I don't think Mozilla should guess, because this will bring false positives. I think Moz should give the user a choice (like Konqueror does) to download the file or view it as text, with the ability to remember the user's preference if he wants that, as was proposed several times. And not via an extension, as it is a general user annoyance that should be solved in the core.

This thread worries me about the priorities of the Firebird project. It seems the project goals are more important than the users. I hope the devs won't let it come to this.

Oh well, I'm not going to react to this thread anymore. It turned into a flamewar after all. :-/
fud
Posts: 85
Joined: May 27th, 2003, 3:52 pm

Post by fud »

I haven't read the thread's 10 pages, but there's only one case where an incorrect MIME type really bothers me:
I'm given a direct URL like http://www.mysite.com/foo.zip

There's no way to get the file except using IE or creating an HTML document with a link to the file so that I can right-click and save.
iamnotniles
Posts: 1293
Joined: December 22nd, 2002, 5:32 am
Location: Dundee, Scotland

Post by iamnotniles »

fud wrote:I haven't read the thread's 10 pages, but there's only one case where an incorrect MIME type really bothers me:
I'm given a direct URL like http://www.mysite.com/foo.zip

There's no way to get the file except using IE or creating an HTML document with a link to the file so that I can right-click and save.


Or you could let all the code rubbish load and then use Save Page As.
User avatar
scratch
Posts: 4942
Joined: November 6th, 2002, 1:27 am
Location: Massachusetts

Post by scratch »

Paradox52525 wrote:That's exactly what I proposed quite a while ago, but again no one actually did anything. There's a thread here:

http://forums.mozillazine.org/viewtopic.php?t=18122

where an extension to detect binary characters in plaintext is being worked on. Currently you can't actually use it though (it's just a test box you can put a link in) and the detection is far from perfect, but it definitely shows promise. Maybe if someone can actually come up with working code that can detect binary characters in text/plain URLs, the devs might soften a bit about putting it in the core with a dialog like SuperJeff suggested. However, since it is technically against standards, an extension might still be a better idea. Frankly I'd be happy with either, provided there's a working solution. This is a *very* annoying problem, and it's noticeable enough to non-power-users that it could harm Mozilla's reputation with new users.


there's already code that does this in Mozilla for files sent over FTP, because FTP doesn't have MIME types. it looks for a null byte within the first 1024 characters of the file.
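
In Python terms (the actual Mozilla code is C++; this just restates the heuristic described above):

    def ftp_sniff_is_binary(first_chunk: bytes) -> bool:
        # A NUL byte in the first 1024 bytes means the file is treated
        # as binary rather than text.
        return b"\x00" in first_chunk[:1024]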
User avatar
danieljackson
Posts: 83
Joined: July 22nd, 2003, 4:03 pm
Location: UK

Post by danieljackson »

I think a better test would be for any character that's not between ASCII 9-13 or 32-127 in the first x bytes of anything sent as "text/plain" (or without any MIME type?) [EDIT: I suppose the second range should be extended from 32-127 to 32-255 to allow accented chars and UTF-8, leaving just 0-8 and 14-31 for detecting "binary"]
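
Sketched in Python (sniff_len stands in for the unspecified "first x bytes"):

    def appears_binary(data: bytes, sniff_len: int = 1024) -> bool:
        # Allowed: ASCII 9-13 (tab, LF, VT, FF, CR) and 32-255, so accented
        # Latin-1 and UTF-8 text pass; only 0-8 and 14-31 flag "binary".
        allowed = set(range(9, 14)) | set(range(32, 256))
        return any(b not in allowed for b in data[:sniff_len])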

Then display (as mentioned before):

"The server sent this document as plain text, but it appears to be a data file: Save file as..., Open as text, [checkbox] remember my choice"


If the OS has another way of determining file type (e.g. in Windows, applications can register file extensions), maybe the box could also show "Open with [application]..." to be extra nice.

I can't really see how anyone could object; it's a simple fix for such a common problem, one that will really put novice users off for good (and annoy even experts).

...and the "power user" can very easily go back to viewing his .RAR files as plaintext rubbish if he wants. :lol:
Last edited by danieljackson on August 14th, 2003, 2:15 am, edited 1 time in total.
Dunderklumpen
Posts: 16224
Joined: March 9th, 2003, 8:12 am

Post by Dunderklumpen »

If a file is being sent out as plain text but has characters in it that say it most likely is not a text file - would it be possible to check the extension of the filename (.rar, .wba, etc.) and then let the user decide what to do with it?

I think that's what "another" browser is doing....
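
Something along these lines, perhaps (a Python sketch of the idea; the extension list is only an example):

    import os

    KNOWN_BINARY_EXTS = {".rar", ".wba", ".zip", ".gz", ".exe"}  # example list

    def suggested_action(url_path: str, body: bytes) -> str:
        # Any of the content sniffs sketched earlier could plug in here;
        # a plain NUL-byte check stands in for brevity.
        looks_binary = b"\x00" in body[:1024]
        ext = os.path.splitext(url_path)[1].lower()
        if looks_binary and ext in KNOWN_BINARY_EXTS:
            return "ask-user"        # show a save / open-as-text dialog
        return "render-as-text"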
User avatar
scratch
Posts: 4942
Joined: November 6th, 2002, 1:27 am
Location: Massachusetts

Post by scratch »

Dunderklumpen wrote:If a file is being sent out as plain text but has characters in it that say it most likely is not a text file - would it be possible to check the extension of the filename (.rar, .wba, etc.) and then let the user decide what to do with it?

I think that's what "another" browser is doing....


nope, the other browser determines the type for all things, not just binary things. so, for example, if you have an HTML doc sent out as text/plain, IE still displays it as HTML. with this behavior, it would still be displayed as plain text. I think that if anything is going to be done, it should just pop up a dialog saying the file claims to be plain text but seems to actually be binary, so the server is probably misconfigured. then it could allow the user to either view it as text or save it.
Post Reply