Hi Dave,
On 27/10/16 19:30, Paul Smedley wrote:
Hi Dave,
On 23/10/16 13:41, Dave Yeo wrote:
Paul Smedley wrote:
OK, will try to come up with a testcase, then re-test with GCC 6.2.
Hi Dave,
On 23/10/16 04:18, Dave Yeo wrote:
Our GCC does have a bug with SSE+ instructions, it doesn't align them properly unless forced with -mstackrealign (there's also -mpreferred-stack-boundary=4)
This might be fixed in GCC 6.x - do you have a simple testcase?
You should be able to just do some floating point math and compile with
-msse2 -mfpmath=sse and see if it crashes with the xmms registers not
aligned. Might want to target a newer architecture as well.
Dave
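A minimal sketch of such a testcase (hypothetical, not Paul's actual sse2.cpp): keep several doubles live at once so the compiler spills xmm values to the stack. Built with something like `gcc -O2 -msse2 -mfpmath=sse`, a misaligned stack would fault on the aligned xmm spills; with a correctly aligned stack it just returns a finite number.

```c
/* Hypothetical SSE2 alignment testcase.  Compile with e.g.
 *   gcc -O2 -msse2 -mfpmath=sse test.c
 * Several live doubles force xmm register pressure and stack spills;
 * if the stack is not 16-byte aligned, aligned spills can trap. */
double sse_mix(double a, double b)
{
    double t1 = a * b + 1.0;
    double t2 = a / (b + 2.0) + t1 * 0.5;
    double t3 = t1 * t2 - t2 / t1;
    return t1 + t2 + t3;
}
```

Calling it from a loop with varying inputs would exercise the spill paths; a crash rather than a wrong number is the expected failure mode here.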
http://smedley.id.au/tmp/sse2.cpp runs fine with -msse2 when built with
g++ from 4.4.6, 4.9.2 and 6.2.0
Only 6.2.0 has the stack align changes :/
I might try compiling qtgui4.dll with gcc 6.2.0 without the changes from https://trac.netlabs.org/qt4/ticket/187 - that at least is a known
failure case...
On Sun, 23 Oct 2016 16:35:43 UTC, David Parsons
<dwparsons@t-online.de> wrote:
What does the /DUP switch do, is there a list of switches available anywhere?
Dave P.
"/DUP" is undocumented, and is really meant for development, but it
does show the version that is installed. Try it, and it will make
sense. I don't know of any others.
On Sun, 23 Oct 2016 18:09:43 UTC, Dave Yeo <dave.r.yeo@gmail.com>
wrote:
Hi Dave,
There's also the question of whether OS/2 actually supports SSE.
A testcase that was suggested to me was running the Flac testsuite in
twice as many sessions as the number of enabled cores. While one session
passed the testcase, multiple sessions failed until only one was left
which passed.
This implies that the OS/2 kernel does not save the XMMS registers
during a context switch, and SSE (and MMX?) code will fail if more than
one thread accesses the XMMS registers.
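A hedged sketch of that kind of stress test (assuming pthreads; the thread and iteration counts are arbitrary): oversubscribe the cores with threads doing xmm-register FP math. Every thread runs the same deterministic computation, so if the kernel preserves the XMM state across context switches, all results come out identical.

```c
/* Sketch of a FLAC-style stress test: more threads than cores, each
 * hammering FP math that lives in xmm registers when compiled with
 * -msse2 -mfpmath=sse.  If the kernel failed to save/restore xmm
 * state on a context switch, per-thread results would diverge. */
#include <pthread.h>
#include <stddef.h>

#define NTHREADS 8
#define NITERS   200000

static void *worker(void *arg)
{
    volatile double x = 1.000001;   /* reloaded each iteration */
    double acc = 0.0;
    for (int i = 0; i < NITERS; i++)
        acc += x * x;               /* deterministic per-thread sum */
    *(double *)arg = acc;
    return NULL;
}

/* Returns 1 if all threads computed the same value, 0 otherwise. */
int xmm_stress(void)
{
    pthread_t tid[NTHREADS];
    double out[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, &out[i]);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    for (int i = 1; i < NTHREADS; i++)
        if (out[i] != out[0])
            return 0;
    return 1;
}
```

On a healthy kernel this always returns 1; the reported FLAC failure corresponds to it returning 0 under load.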
If I understand the Intel docs, it should. If the hardware indicates
that support exists, current kernels will use fxsave/fxrstor and will
set cr4.OSFXSR, which means the XMM and MXCSR registers will be saved
and restored by fxsave/fxrstor. Of course, this does not mean that the
code is defect free.
BTW, what do you mean by XMMS registers? That is not a term used by
the Intel docs.
I'm also pretty sure that most developers would build
a 386 version on request. I know I would.
> I'll monitor the newsgroups as long as I can.
Oh, I know. Bless you. Over here most ISPs won't even offer Usenet
anymore. Even my email may stop working, according to a rather old announcement from my ISP (former ibm.net).
A.D. Fundum <what.ever@neverm.ind> wrote:
>> The latest FF assumes an eCS 2.x environment
> The latest FF, which Bitwise compiled to target a i486
> due to the need for the GCC atomics, runs fine on my
> OS/2 ver 4+fp15
In general it doesn't. FF + OS/2 = missing DLLs. You had to eCS 2'ify
your OS/2 system, which requires quite a few extraordinary efforts
from a user's point of view (the requirements of requirements of
requirements of requirements of required software). It will work, why
not, but I wouldn't sell it as OS/2 software. Your customers will
complain that they cannot find all DLLs, and the solution to obtain
those DLLs ... su^G^Gblows. Engineers (it works) vs humans. ;-)
Make a statically linked version?
On Sat, 22 Oct 2016 17:48:37 UTC, Dave Yeo <dave.r.yeo@gmail.com>
wrote:
Hi Dave,
There is a field in the LX header which specifies the CPU Type but all
files show i386 even though I believe that many are at i686.
As Dave noted this is probably gcc. The relevant wlink code is
    case CMT_WAT_PROC_MODEL:
    case CMT_MS_PROC_MODEL:
        proc = GET_U8_UN( ObjBuff ) - '0';
        if( proc > FmtData.cpu_type )
            FmtData.cpu_type = proc;
        break;
and
    if( FmtData.cpu_type <= 3 ) {
        exe_head.cpu_type = OSF_CPU_386;
    } else {
        exe_head.cpu_type = OSF_CPU_486;
    }
which says that the CPU type comes from the OMF record and wlink maps
the CPU type to either 386 or 486.
I've compiled a test program here with 'gcc -march=i686 test.c' which
compiles and runs successfully, but it also shows the CPU type as i386.
What linker did you use and did you check the COMMENT records
generated by gcc?
Could it be that the -march switch is silently ignored in our gcc builds?
For the record I used gcc v4.5.2.
That's pretty antique.
It's the linker's job to set fields in the LX header; all GCC does is
produce the correct assembly code and obj files and ask the linker to
link them into an executable (or DLL).
That's sorta true. As noted above, gcc is responsible for indicating
the CPU type required by each object module.
Wlink could be patched (perhaps already is?) to use i686 in the CPU
field but I don't know if the kernel would be happy to load something
besides i386.
AFAIK, it would not care. I've not seen any code that cares about the
CPU type field in the LX header.
Steven
I don't know which linker was used to produce the netlabs files but presumably it doesn't set it either.
>> FF + OS/2 = missing DLLs. You had to eCS 2'ify your OS/2
>> system, which requires quite a few extraordinary efforts from
>> a user's point of view (the requirements of requirements of
>> requirements of requirements of required software).
> Make a statically linked version?
That was discussed in comp.os.os2.apps, and I'm not expecting that
Dave will release a fully statically linked SM version of BWW's FF for
an eCS 2.x environment. IIRC he has already reduced the number of
required DLLs in the past, but as such FF is BWW's impressive project.
Releasing the DLLs as "RPM" ZIP files could work too, to offer people
an alternative way to manage their own packages, but apparently
"unification" was more important than users.
FWIW, BWW has complained ("press release") that people have to contact
them. I'm not using eCS 2.x DE/EN nor an eCS 2.x environment, I'm not
using the FF for OS/2 in question (because of their excessive
requirements), and I tend to not create accounts to report or discuss
RFCs, policies and bugs.
As for GCC, the latest I've found on Paul's site is 4.9.2, not so much
newer and anyway I don't use C very often & when I do I normally use
Watcom.
Hi Dave,
On 29/10/16 23:07, David Parsons wrote:
As for GCC, the latest I've found on Paul's site is 4.9.2, not so much
newer and anyway I don't use C very often & when I do I normally use
Watcom.
6.2.0 is available - I posted a link at os2world. It's probably tested well enough that I should add it to my site as well now....
mountains hide the satellites.
Canada is probably the worst country in the world
for internet and cell.
unrpm is also available to extract the files.
They do use @UNIXROOT
Arca Noae is willing to support OS/2, eg with a subscription
> unrpm is also available to extract the files.
Rhetorical: most files, not all files? I'll look at it again after a
new major upgrade, albeit most of my hardware probably won't support
FF49+ anyway (SSE2-related; FF49 for Windows requires a Pentium 4+).
> They do use @UNIXROOT
To be fair, the main problem of a real Unix directory structure is
that I don't have it (free boot disk space may become a problem too).
There's no "Unixification Package for OS/2 and eCS 1.x". OTOH, if I
were to install eCS 2, and eCS 2 introduced a real Unix directory structure (i.e. not an avoidable directory called "@UNIXROOT"), then I
may smile and enjoy my new OS while it's being installed by the
installer of the OS. IOW: I'll use Unix if I like a Unix directory structure, but I wouldn't uninstall it. In the end it's an acceptable
PITA, if an installer for OS/2/eCS 1.x installs the basics.
> Arca Noae is willing to support OS/2, eg with a subscription
Well, as long as it's clear. I'd prefer to use eCS 1.2 (USB support,
new APIs since my W4 FP9) and rather old "new" hardware (T60p, too
many cores, apparently even after disabling all but one core). At the
moment that's my best theoretical combination, but now I cannot even
install eCS 1.2. What I want/need is quite clear, from a user's point
of view. eCS 2.x works, but all I've seen is the demo CD.
Run ANPM (freely available from the Arca Noae site)
and it'll install the basic @UNIXROOT stuff on its first
run.
The subscription would also support eCS 1.2
= 1 GHz) PIIIs now is 0. It looks like I have to update a few files. I'm not sure if ACPI is involved, it's one of the oldest IBM/Lenovo
You cannot solve your problems by knowing what is on the disk. You can
only solve them by getting YUM to know what it thinks it installed.
That means letting YUM do it. ANPM simply runs YUM for you.
On 30-10-16 00:22, Doug Bissett wrote:
You cannot solve your problems by knowing what is on the disk. You can
only solve them by getting YUM to know what it thinks it installed.
That means letting YUM do it. ANPM simply runs YUM for you.
Of course I can. I only have to identify which files to update and then install
the packages which contain them using ANPM/YUM. From then on ANPM will manage those packages.
Dave P.
Generally wl.exe (wlink with support for our HLL debug format).
You could open an issue at the Netlabs RPM site to add Steven's patch
and restored by fxsave/fxrstor. Of course, this does not mean that the
code is defect free.
That's what I thought (assuming you mean when doing a context switch),
but the FLAC failure implies otherwise, at least in an SMP environment.
On Sat, 29 Oct 2016 18:43:47 UTC, Dave Yeo <dave.r.yeo@gmail.com>
wrote:
Hi Dave,
Warpstock kept me away from the newsgroups...
Generally wl.exe (wlink with support for our HLL debug format).
You could open an issue at the Netlabs RPM site to add Steven's patch
What patch? Do you mean the wlink with HLL debug support? I've not
checked, but I would be surprised if it's not already available from
the Netlabs rpm since it's used to build Firefox.
As for the linker, it is an old version of wl.exe which seems to be a patched version of wlink v1.6.
I don't have the source for it and I don't know now where I got it from so I don't know what is patched.
When you invoke it, it says something about
experimental HLL.
On Wed, 26 Oct 2016 15:37:35 UTC, David Parsons <dwparsons@t-online.de> wrote:
On 25-10-16 18:50, Doug Bissett wrote:
On Sun, 23 Oct 2016 16:35:43 UTC, David Parsons <dwparsons@t-online.de>
wrote:
What does the /DUP switch do, is there a list of switches available
anywhere?
Dave P.
"/DUP" is undocumented, and is really meant for development, but it does
show the version that is installed. Try it, and it will make sense. I
don't know of any others.
OK, it helps a bit, but I presume the platform column shows which packages
it has available or has installed itself, and not the file version on disk
which was installed from a zip file before ANPM became available.
Still I think it will be helpful, thanks.
Dave P.
You should never get to the files on the disk from before you installed RPM/YUM (PATH and LIBPATH need to have the YUM paths first, although some insist that the dot entry needs to be first in LIBPATH). However, it is still possible to use the wrong file, so you really should eliminate ALL duplicates of files that are in the \usr directory structure, everywhere else on your system (with logical exceptions). Once you get that mess cleaned up, ANPM (actually YUM) will control the versions for you, and all you need to do is keep them up to date (although that has also been known to cause problems when the packages are not built properly).
On 26-10-16 21:15, Doug Bissett wrote:
On Wed, 26 Oct 2016 15:37:35 UTC, David Parsons <dwparsons@t-online.de> wrote:
On 25-10-16 18:50, Doug Bissett wrote:
On Sun, 23 Oct 2016 16:35:43 UTC, David Parsons <dwparsons@t-online.de> wrote:
What does the /DUP switch do, is there a list of switches available
anywhere?
Dave P.
"/DUP" is undocumented, and is really meant for development, but it does
show the version that is installed. Try it, and it will make sense. I
don't know of any others.
OK, it helps a bit, but I presume the platform column shows which packages it has available or has installed itself, and not the file version on disk which was installed from a zip file before ANPM became available.
Still I think it will be helpful, thanks.
Dave P.
You should never get to the files on the disk from before you installed RPM/YUM (PATH and LIBPATH need to have the YUM paths first, although some insist that the dot entry needs to be first in LIBPATH). However, it is still possible to use the wrong file, so you really should eliminate ALL duplicates of files that are in the \usr directory structure, everywhere else on your system (with logical exceptions). Once you get that mess cleaned up, ANPM (actually YUM) will control the versions for you, and all you need to do is keep them up to date (although that has also been known to cause problems when the packages are not built properly).
When I finally got fed up trying to keep adding extra dependencies for Firefox &
Thunderbird by extracting the files from the zip & rpm files on Netlabs and decided to use ANPM, I already had a very mature /usr tree for other programs which I wanted to keep, & I did not want yet another linux directory structure.
I eliminated any duplicates & moved a few stray linux orientated files into /usr/... before running ANPM for the first time.
ANPM added the directories at the beginning of PATH & LIBPATH, leaving the existing entries where they were, duplicating the directories in PATH & LIBPATH, which I later removed.
I had expected ANPM would overwrite any files already on the disk with its version but it seems that it did not overwrite all unfortunately.
So now to the present problem. The Firefox 38.*.* series are appallingly slow at
scrolling, and after updating a few files for another reason I noticed that the
scrolling was noticeably better with these updates, which were i686 files, so I wanted to see if there were any other i386 files which I could update & perhaps
gain a bit better performance.
This led to the problem of how to identify i386 versions on the disk.
This should have been easy, just look at the CPU_Type field in the LX header. As we now know, that field is not set as expected by wl.exe without Steven's patch.
I don't know which linker was used to produce the netlabs files but presumably
it doesn't set it either.
HTH...
Dave P.
On Thu, 10 Nov 2016 16:52:30 UTC, Dave Yeo <dave.r.yeo@gmail.com>
wrote:
Hi Dave,
Maybe I misunderstood. Didn't you say that you'd patched wlink to
report the CPU architecture? If so, that was the patch I meant. If not,
sorry for the noise.
Perhaps I did not state things clearly. What I posted was the
existing code that sets the CPU type field. This could be modified to
report other CPU types, but I have no plans to do this since the
field is effectively ignored by OS/2.
On Sat, 29 Oct 2016 12:37:38 UTC, David Parsons
<dwparsons@t-online.de> wrote:
When you invoke it, it says something about
experimental HLL.
You mean like:
[d:\tmp]wlink
** EXPERIMENTAL (HLL) ** Open Watcom Linker Version 2.0beta1 Limited Availability
Portions Copyright (c) 1985-2002 Sybase, Inc. All Rights Reserved.
Source code is available under the Sybase Open Watcom Public License.
See http://www.openwatcom.org/ for details.
Press CTRL/Z and then RETURN to finish
Steven
Maybe I misunderstood. Didn't you say that you'd patched wlink to
report the CPU architecture? If so, that was the patch I meant. If not,
sorry for the noise.
Hello all,
I upgraded a few DLLs used by Firefox & co to i686 standard recently and noticed
a significant performance improvement, so I now want to update the rest to i686
also. I've been able to do some with the help of ANPM (it would be easier if ANPM showed individual file dates & sizes) but I cannot find a way to tell what
standard a particular file is at.
There is a field in the LX header which specifies the CPU Type but all files show i386 even though I believe that many are at i686.
I've compiled a test program here with 'gcc -march=i686 test.c' which compiles
and runs successfully but it also shows the CPU type as i386.
Could it be that the -march switch is silently ignored in our gcc builds?
For the record I used gcc v4.5.2.
I've searched on the internet and read through many pages of switches available
for gcc but not found anything other than march & mtune.
Does anyone know if there are any other switches I should use or have any advice?
Regards,
Dave P.
Hello all,
I upgraded a few DLLs used by Firefox & co to i686 standard recently and noticed a significant performance improvement, so I now want to update
the rest to i686 also. I've been able to do some with the help of ANPM
(it would be easier if ANPM showed individual file dates & sizes) but I
cannot find a way to tell what standard a particular file is at.
There is a field in the LX header which specifies the CPU Type but all
files show i386 even though I believe that many are at i686.
I've compiled a test program here with 'gcc -march=i686 test.c' which compiles and runs successfully but it also shows the CPU type as i386.
Could it be that the -march switch is silently ignored in our gcc builds?
For the record I used gcc v4.5.2.
I've searched on the internet and read through many pages of switches available for gcc but not found anything other than march & mtune.
Does anyone know if there are any other switches I should use or have
any advice?
Hello all,
I upgraded a few DLLs used by Firefox & co to i686 standard recently and noticed a significant performance improvement, so I now want to update
the rest to i686 also. I've been able to do some with the help of ANPM
(it would be easier if ANPM showed individual file dates & sizes) but I
cannot find a way to tell what standard a particular file is at.
There is a field in the LX header which specifies the CPU Type but all
files show i386 even though I believe that many are at i686.
I've compiled a test program here with 'gcc -march=i686 test.c' which compiles and runs successfully but it also shows the CPU type as i386.
Could it be that the -march switch is silently ignored in our gcc builds?
For the record I used gcc v4.5.2.
I've searched on the internet and read through many pages of switches available for gcc but not found anything other than march & mtune.
Does anyone know if there are any other switches I should use or have
any advice?
There is a field in the LX header which specifies the CPU Type but all
files show i386 even though I believe that many are at i686.
I've compiled a test program here with 'gcc -march=i686 test.c' which compiles and runs successfully but it also shows the CPU type as i386.
Could it be that the -march switch is silently ignored in our gcc builds?
For the record I used gcc v4.5.2.
There is a field in the LX header which specifies the CPU Type but all
files show i386 even though I believe that many are at i686.
I've compiled a test program here with 'gcc -march=i686 test.c' which
compiles and runs successfully but it also shows the CPU type as i386.
Could it be that the -march switch is silently ignored in our gcc builds?
For the record I used gcc v4.5.2.
Our GCCs understand -march (at least as current when released) and
-mtune fine, and for almost all CPUs i686 is a much better choice as it includes a few needed instructions for things like atomic operations and, perhaps more importantly, aligns the instructions such that modern CPUs perform much better.
You can test by targeting a newer CPU than you're using (might need
floating point code) and getting a sigill (illegal instruction).
A while back I built a SM targeted at my desktop C2D and it wouldn't run
on P4s, and built a TB targeted at my laptop's Pentium M which ran most everywhere.
Targeting a CPU with SSE2+ instructions really helps floating point math
as it is done with the SSE+ instructions and registers instead of the
i387+ co-processor.
And it sadly fails on some non Intel CPUs.
Our GCC does have a bug with SSE+ instructions, it doesn't align them properly unless forced with -mstackrealign (there's also -mpreferred-stack-boundary=4)
This might be fixed in GCC 6.x - do you have a simple testcase?
> And it sadly fails on some non Intel CPUs.
Only optimize if you have to, without introducing avoidable implicit requirements. Don't optimize "because it's faster/cool". Ignoring
Mozilla's latest SSE2 restrictions, a Pentium II is a good bottom line
for FF/SM/TB because of the installable memory, but not because of the
CPU.
I'd recommend using a compiler's default, without having to document
the requirements. If each 1% may count (video, modern browser, and so
on), then try to select a Pentium II. If you're requiring a CPU newer
than a Pentium 4, then you've lost most of the OS/2
community. Silently. In most of the world's languages there's no such
thing as OS/2/eCS for modern hardware.
Hi Dave,
On 23/10/16 04:18, Dave Yeo wrote:
Our GCC does have a bug with SSE+ instructions, it doesn't align them
properly unless forced with -mstackrealign (there's also
-mpreferred-stack-boundary=4)
This might be fixed in GCC 6.x - do you have a simple testcase?
On 22.10.16 19.48, Dave Yeo wrote:
There is a field in the LX header which specifies the CPU Type but all
files show i386 even though I believe that many are at i686.
I've complied a test program here with 'gcc -march=i686 test.c' which
compiles and runs successfully but it also shows the CPU type as i386.
Could it be that the -march switch is silently ignored in our gcc
builds?
For the record I used gcc v4.5.2.
Our GCCs understand -march (at least as current when released) and
-mtune fine, and for almost all CPUs i686 is a much better choice as it
includes a few needed instructions for things like atomic operations and,
perhaps more importantly, aligns the instructions such that modern CPUs
perform much better.
Most atomic instructions have been available since the 486. Only 64 and 128 bit
atomics were introduced later.
But modern CPUs cannot be classified by a single scalar value anymore.
Some features also get removed with newer ones. Others depend on the manufacturer or product line.
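To illustrate that distinction about wider atomics (a sketch, not from the thread; `bump64` is a hypothetical helper): an 8-byte atomic read-modify-write on IA-32 needs cmpxchg8b, which first appeared with the Pentium (i586), while plain 32-bit atomics compile fine for a 486.

```c
#include <stdint.h>

static uint64_t counter64;

/* With -march=i586 or later, gcc can inline this as a
 * lock cmpxchg8b loop on IA-32; with -march=i486 there is no
 * single instruction to use, which is exactly the "64-bit
 * atomics came later" point above. */
uint64_t bump64(void)
{
    return __sync_add_and_fetch(&counter64, 1);
}
```

The `__sync_*` builtins are the ones a gcc of that era (4.x) provides; the target -march decides whether they can be inlined at all.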
You can test by targeting a newer CPU than you're using (might need
floating point code) and getting a sigill (illegal instruction).
A while back I built a SM targeted at my desktop C2D and it wouldn't run
on P4s, and built a TB targeted at my laptop's Pentium M which ran most
everywhere.
Using -march has its drawbacks. If you target a specific CPU type you cannot assume that it runs on everything newer.
Targeting a CPU with SSE2+ instructions really helps floating point math
as it is done with the SSE+ instructions and registers instead of the
i387+ co-processor.
And it sadly fails on some non Intel CPUs.
That's true, but if targeting i486, why not target i686?
I started targeting i686 during the Mozilla 10ESR cycle
and have yet to have a complaint about sigills.
Seems that most every OS/2 user has something that handles
i686 instructions.
Hello all,
I upgraded a few DLLs used by Firefox & co to i686 standard recently and noticed a significant performance improvement, so I now want to update
the rest to i686 also. I've been able to do some with the help of ANPM
(it would be easier if ANPM showed individual file dates & sizes) but I
cannot find a way to tell what standard a particular file is at.
There is a field in the LX header which specifies the CPU Type but all
files show i386 even though I believe that many are at i686.
I've compiled a test program here with 'gcc -march=i686 test.c' which compiles and runs successfully but it also shows the CPU type as i386.
Could it be that the -march switch is silently ignored in our gcc builds?
For the record I used gcc v4.5.2.
I've searched on the internet and read through many pages of switches available for gcc but not found anything other than march & mtune.
Does anyone know if there are any other switches I should use or have
any advice?
Regards,
Dave P.
I am fairly sure that the linker is supposed to correctly set this field,
but I would not be surprised if every linker ever written for OS/2 simply
neglects to set it properly.
Additionally the LX spec only defines these values for CPU type:
01H - 80286 or upwardly compatible CPU is required to execute this module.
02H - 80386 or upwardly compatible CPU is required to execute this module.
03H - 80486 or upwardly compatible CPU is required to execute this module.
That said, a 16-bit OS/2 linker will likely set this field to 01h, whereas a 32-bit OS/2 linker will set it to 02h just to indicate that a
32-bit CPU is needed, as a 16-bit CPU is not enough for OS/2 >= version 2.0.
Anything higher than a 486 is therefore undefined in the standard.
I don't think you can rely on anything set in this field.
Lars
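A sketch of inspecting that field directly, following the offsets discussed above (`lx_cpu_type` is a hypothetical helper, not an existing tool): the DWORD at offset 3Ch of the old MZ stub points at the 'LX' signature, and the CPU type is the 16-bit word at offset 08h of the LX header.

```c
#include <stddef.h>
#include <stdint.h>

/* Given a whole LX file image in memory, return the CPU type word
 * from the LX header (01h = 286, 02h = 386, 03h = 486 per the LX
 * spec), or -1 if the image doesn't look like an MZ-stubbed LX file.
 * Fields are read byte-by-byte as little-endian, so this works on
 * any host. */
int lx_cpu_type(const uint8_t *img, size_t len)
{
    if (len < 0x40 || img[0] != 'M' || img[1] != 'Z')
        return -1;
    uint32_t lxoff = img[0x3C] | ((uint32_t)img[0x3D] << 8)
                   | ((uint32_t)img[0x3E] << 16) | ((uint32_t)img[0x3F] << 24);
    if (lxoff + 0x0A > len || img[lxoff] != 'L' || img[lxoff + 1] != 'X')
        return -1;
    return img[lxoff + 8] | (img[lxoff + 9] << 8);
}
```

This is the same word the hex-editor experiment elsewhere in the thread patches; reading it is safe regardless of whether the kernel honors it.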
On Sat, 22 Oct 2016 16:49:08 UTC, David Parsons
<dwparsons@t-online.de> wrote:
Hello all,
I upgraded a few DLLs used by Firefox & co to i686 standard recently and noticed
a significant performance improvement, so I now want to update the rest to i686
also. I've been able to do some with the help of ANPM (it would be easier if
ANPM showed individual file dates & sizes) but I cannot find a way to tell what
standard a particular file is at.
Using the YUM command line should tell you what you have installed.
Or, add "/DUP" as a parameter to ANPM. Just be careful what you select
when you install things.
ANPM has an outstanding request to add a feature to be able to change between versions, but it has not been done yet. Note that file date &
size mean nothing, and individual files seem to have no information
about what version they are (no BLDLEVEL information). Some of them, apparently, have a way to query their version, but it seems that it is inconsistent, and you need to know exactly how to do it for different
files.
Our GCCs understand -march (at least as current when released) and -mtune fine,
and for almost all CPUs i686 is a much better choice as it includes a few needed instructions for things like atomic operations and, perhaps more importantly, aligns the instructions such that modern CPUs perform much better.
You can test by targeting a newer CPU than you're using (might need floating point code) and getting a sigill (illegal instruction).
A while back I built a SM targeted at my desktop C2D and it wouldn't run on P4s,
and built a TB targeted at my laptop's Pentium M which ran most everywhere. Targeting a CPU with SSE2+ instructions really helps floating point math as it
is done with the SSE+ instructions and registers instead of the i387+ co-processor.
Our GCC does have a bug with SSE+ instructions, it doesn't align them properly
unless forced with -mstackrealign (there's also -mpreferred-stack-boundary=4)
It's the linker's job to set fields in the LX header; all GCC does is produce the
correct assembly code and obj files and ask the linker to link them into an executable (or DLL).
Wlink could be patched (perhaps already is?) to use i686 in the CPU field but I
don't know if the kernel would be happy to load something besides i386. I guess
it would be easy enough to use a hex editor to test and if it works, to file a
feature request bug.
Ideally our uname should be updated to report -i686 as well so the toolchain automatically targets it
Dave
Out of curiosity I patched the CPU Type field of my test program to
i486, i586, i686 and i786, and they all loaded and ran successfully,
even the non-existent i786.
Many people have something newer than a P4 though;
I wouldn't target anything newer except as an alternative.
The big advantage of targeting i686 is aligning the instructions
on word boundaries, which should benefit almost all newer CPUs.
> Many people have something newer than a P4 though;
> I wouldn't target anything newer except as an alternative.
The latest FF assumes an eCS 2.x environment, which you'll have to buy
for anything newer than a Pentium 4.
> The big advantage of targeting i686 is aligning the instructions
> on word boundaries, which should benefit almost all newer CPUs.
So start writing software for eCS. AFAICT this always implies the use
of an i686+. The remaining users of OS/2, including the well-known and appreciated Korean FP15 example, will be aware of possible problems.
With a "SeaMonkey for eCS", the bottom line is an i686. But please use
386 as a target for a CONFIG.SYS sorter for OS/2. Typically I'll use
the default setting of VAC, a 386, despite the fact that even I
don't have a 386. AFAIK the installer of eCS requires an i686, so eCS
is a clear generic threshold and it implies the use of an i686+ CPU.
If your software assumes an eCS 2 environment (YUK installer, and so
on), then "SeaMonkey for eCS 2" will be better and clear. Lenovo is
the threshold. All of my OS/2 or eCS hardware in use is still IBM, 486
up to and including a Pentium 4. I'm still waiting for a translated eCS
2+...
The bottom line of M$ Windows updates with FF49+ updates
(SSE2-related) is a Pentium 4. The main reason why I won't complain
about a FF49 for eCS 2.x+ (i.e. an eCS 2.x environment) is because ...
I'm not using it.
Anyway, I'd suggest using a compiler's default target. If it has a
real use, then target an i686 (100% CPU load, video, significant speed
gains), then perhaps rename your product to "<name> for eCS". But
please don't target an i686 just because you've studied and understood compiler switches, without a significant gain.
On 22-10-16 19:48, Dave Yeo wrote:
Our GCCs understand -march (at least as current when released) and -mtune fine, and for almost all CPUs i686 is a much better choice as it includes a few needed instructions for things like atomic operations and, perhaps more importantly, aligns the instructions such that modern CPUs perform much better.
You can test by targeting a newer CPU than you're using (might need floating point code) and getting a sigill (illegal instruction).
A while back I built a SM targeted at my desktop C2D and it wouldn't run on P4s and built a TB targeted at my laptop's Pentium M which ran most everywhere.
Targeting a CPU with SSE2+ instructions really helps floating point math as it is done with the SSE+ instructions and registers instead of the i387+ co-processor.
Our GCC does have a bug with SSE+ instructions, it doesn't align them properly unless forced with -mstackrealign (there's also -mpreferred-stack-boundary=4)
It's the linker's job to set fields in the LX header; all GCC does is produce the correct assembly code and obj files and ask the linker to link them into an executable (or DLL).
Wlink could be patched (perhaps already is?) to use i686 in the CPU field but I don't know if the kernel would be happy to load something besides i386. I guess it would be easy enough to use a hex editor to test and if it works, to file a feature request bug.
Ideally our uname should be updated to report -i686 as well so the toolchain automatically targets it
Dave
Agreed, but it has to be told somehow. I had hoped the compiler would
pass the information on to the linker but if it does, it seems the
linker ignores it.
Out of curiosity I patched the CPU Type field of my test program to
i486, i586, i686 and i786, and they all loaded and ran successfully,
even the non-existent i786.
Paul Smedley wrote:
Hi Dave,
On 23/10/16 04:18, Dave Yeo wrote:
Our GCC does have a bug with SSE+ instructions, it doesn't align them
properly unless forced with -mstackrealign (there's also
-mpreferred-stack-boundary=4)
This might be fixed in GCC 6.x - do you have a simple testcase?
You should be able to just do some floating point math and compile with -msse2 -mfpmath=sse and see if it crashes with the xmms registers not aligned. Might want to target a newer architecture as well.
Dave
> That's true, but if targeting i486, why not target i686?
Why target i686? Typically you'll have to document it, because OS/2 is
i386+. Or rename a product to "... for eCS". eCS is i686+.
> I started targeting i686 during the Mozilla 10ESR cycle
> and have yet to have a complaint about sigills
Mozilla is the best example why an i686 can be a good target, because
browsing with an i486 would be "insane". But there's no reason why a CONFIG.SYS sorter should target the i686 family.
> Seems that most every OS/2 user has something that handles
> i686 instructions
You'd use older $5 hardware for specific apps which have stopped working, for example due to required old native video drivers.
If using it involves browsing third-party websites, then an i686 is the bottom line because of the installable memory. If an i686 were the default of GCC, then the remaining users indeed won't complain, and owners of old hardware will be familiar with such a PITA.
OTOH I'm not using the latest FF due to the assumed eCS 2.x
environment, so I've left your Mozilla for OS/2 community "silently".
In a nutshell, a 386 is the best default target. Same as the OS.
That's why. Otherwise it should be documented as a specific requirement. If your users are probably using a modern browser, including video, and your app requires power (video) or each 1% counts (video), then an i686 can be a documented option. You won't even use an i686 to watch a video. If you assume the use of anything newer than a Pentium III, then people will leave your community silently. If the developer assumes the use of at least two cores, then even more people will leave the community silently. FF49+ requires a Pentium 4 or newer. AFAIK the latest FF for OS/2 assumes eCS 2, which comes down to reducing the community too.
In 99.99% of all cases, rounded up to 100%, I won't notice the use of an i686 setting. I may not be aware of a marginal speed gain, and I'll be using an i686+. Hence no complaints. But programmers should document it, because the technical bottom line of the environment is OS/2's 386. Unless the use of an i686+ is bleedin' obvious, like a browser which requires a little bit more RAM than the 32'ish MB of a 486. The required memory requires an i686 or newer. FF38 works with an i686 CPU or with a Pentium 4.
Or stop writing software for OS/2 and finally start writing software
for eCS. eCS comes down to i686+, and users of OS/2 Warp 4 FP9 will
be aware of restrictions, including possibly required APIs. An
acceptable bottom line of OS/2 FP versions is the Y2K FP.
BTW, internet services which require a registration of users to report
bugs will also reduce the number of complaints.
I have version 10 of that spec, from 1996, and that is what it seems like to me.
I have read through the switches for the various linkers we have and found nothing to imply that any of them use the -march setting.
It looks as though I am out of luck unless anyone knows of another way to tell how an exe or dll was compiled.
Dave P.
On 23-10-16 12:56, Lars Erdmann wrote:
I am fairly sure that the linker is supposed to set this field correctly, but I would not be surprised if every linker ever written for OS/2 simply neglects to set it properly.
Additionally, the LX spec only defines these values for CPU type:
01H - 80286 or upwardly compatible CPU is required to execute this module.
02H - 80386 or upwardly compatible CPU is required to execute this module.
03H - 80486 or upwardly compatible CPU is required to execute this module.
That said, a 16-bit OS/2 linker will likely set this field to 01h, whereas a 32-bit OS/2 linker will set it to 02h just to indicate that a 32-bit CPU is needed, as a 16-bit CPU is not enough for OS/2 >= version 2.0.
Anything higher than a 486 is therefore undefined in the standard. I don't think you can rely on anything set in this field.
Is anyone using Warp V4 FP12 or Warp V3?
At least for general use.
Note that according to the readme from the later kernels, an i686 (Pentium Pro) is required to access more than 64MB of memory, using the int15 func e820 to avoid problems on older CPUs.
How many are using such old hardware and need to run new binaries?
Most developers would build a 386 version on request.
In a nutshell, a 386 is the best default target. Same as the OS.
Actually it seems a 486SX is the minimum for Warp V4+fp15
I'll monitor the newsgroups as long as I can.
that would mean a lot of extra work for BWW.
On 22-10-16 19:32, Doug Bissett wrote:
On Sat, 22 Oct 2016 16:49:08 UTC, David Parsons
<dwparsons@t-online.de> wrote:
Hello all,
I upgraded a few DLLs used by Firefox & co to the i686 standard recently and noticed a significant performance improvement, so I now want to update the rest to i686 also. I've been able to do some with the help of ANPM (it would be easier if ANPM showed individual file dates & sizes) but I cannot find a way to tell what standard a particular file is at.
Using the YUM command line should tell you what you have installed.
Or, add "/DUP" as a parameter to ANPM. Just be careful what you select
when you install things.
ANPM has an outstanding request to add a feature to be able to change between versions, but it has not been done yet. Note that file date &
size means nothing, and individual files seem to have no information
about what version they are (no BLDLEVEL information). Some of them, apparently, have a way to query their version, but it seems that it is inconsistent, and you need to know exactly how to do it for different files.
Yes, bldlevel would be best if it were guaranteed to be updated whenever anything changed, including the build flags, but that would mean a lot of extra work for BWW.
I appreciate that using the date & size information is not foolproof, but it would be correct in most cases and requires little extra work.
What does the /DUP switch do? Is there a list of switches available anywhere?
Dave P.
The latest FF assumes an eCS 2.x environment
The latest FF, which Bitwise compiled to target a i486
due to the need for the GCC atomics, runs fine on my
OS/2 ver 4+fp15
In general it doesn't. FF + OS/2 = missing DLLs. You had to eCS 2'ify your OS/2 system, which requires quite a few extraordinary efforts from a user's point of view (the requirements of requirements of requirements of requirements of required software). It will work, why not, but I wouldn't sell it as OS/2 software. Your customers will complain that they cannot find all DLLs, and the solution to obtain those DLLs ... su^G^Gblows. Engineers (it works) vs humans. ;-)
The latest FF assumes an eCS 2.x environment
The latest FF, which Bitwise compiled to target a i486
due to the need for the GCC atomics, runs fine on my
OS/2 ver 4+fp15
I did end up buying eCS 2.1 so I could take advantage of both cores, as Warp V4 has too many non-SMP-safe libraries.
I target my Warp V4+fp15+other free fixes, such as the 32-bit stack.
even the later refresh of Warp V4 needs an i486 to install
Are you using the 486?
I'd assume that it wouldn't support enough memory
to run Mozilla.
Generally when I port something, I do target the default i386
unless there is a reason to target newer.
There is a field in the LX header which specifies the CPU Type, but all files show i386 even though I believe that many are at i686.
I've compiled a test program here with 'gcc -march=i686 test.c', which compiles and runs successfully, but it also shows the CPU type as i386.
Could it be that the -march switch is silently ignored in our gcc builds? For the record I used gcc v4.5.2.
It's the linkers job to set fields in the LX header, all GCC does is
produce the correct assembly code and obj files and ask the linker to
link them into an executable (or DLL).
Wlink could be patched (perhaps already is?) to use i686 in the CPU
field but I don't know if the kernel would be happy to load something
besides i386.
On 25-10-16 18:50, Doug Bissett wrote:
On Sun, 23 Oct 2016 16:35:43 UTC, David Parsons
<dwparsons@t-online.de> wrote:
What does the /DUP switch do, is there a list of switches available anywhere?
Dave P.
"/DUP" is undocumented, and is really meant for development, but it
does show the version that is installed.Try it, and it will make
sense. I don't know of any others.
OK, it helps a bit, but I presume the platform column shows which packages it has available or has installed itself, and not the file version on disk which was installed from a zip file before ANPM became available.
Still, I think it will be helpful, thanks.
Dave P.
There's also the question of whether OS/2 actually supports SSE.
A testcase that was suggested to me was running the Flac testsuite in twice as many sessions as the number of enabled cores. While one session passed the testcase, multiple sessions failed until only one was left, which passed.
This implies that the OS/2 kernel does not save the XMM registers during a context switch, and SSE (and MMX?) code will fail if more than one thread accesses the XMM registers.
FF + OS/2 = missing DLLs. You had to eCS 2'ify your OS/2 system, which requires quite a few extraordinary efforts from a user's point of view (the requirements of requirements of requirements of requirements of required software).
Make a statically linked version?
Hi Dave,
On 23/10/16 13:41, Dave Yeo wrote:
Paul Smedley wrote:
Hi Dave,
On 23/10/16 04:18, Dave Yeo wrote:
Our GCC does have a bug with SSE+ instructions, it doesn't align them properly unless forced with -mstackrealign (there's also -mpreferred-stack-boundary=4)
This might be fixed in GCC 6.x - do you have a simple testcase?
You should be able to just do some floating point math and compile with -msse2 -mfpmath=sse and see if it crashes with the XMM registers not aligned. Might want to target a newer architecture as well.
Dave
OK, will try to come up with a testcase, then re-test with GCC 6.2