comment added
The problem is an invalid asm that is only triggered in 64-bit mode. It is fine in the Lion branch, though adding an asm("":::"memory") in front of __sync_lock_test_and_set doesn't hurt and future-proofs...
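For illustration, here is a minimal sketch of that workaround: an explicit asm("":::"memory") acts as a compiler-level barrier immediately before the __sync intrinsic, so GCC cannot sink earlier stores past the atomic exchange. The macro and function names below are made up for the example and are not the libdispatch definitions.

    /* Compiler-only barrier: prevents GCC from reordering memory
     * accesses across this point; it emits no machine instruction. */
    #define compiler_barrier()  __asm__ __volatile__("" ::: "memory")

    static volatile long queue_lock;

    /* Try to take a spinlock-style flag; returns nonzero on success. */
    static int try_lock(void)
    {
        /* Keep prior plain stores from being sunk below the atomic. */
        compiler_barrier();
        /* __sync_lock_test_and_set returns the previous value,
         * so 0 means the flag was clear and is now ours. */
        return __sync_lock_test_and_set(&queue_lock, 1) == 0;
    }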
comment added
Replying to bonzini@…: Please attach a preprocessed testcase (a *.i file obtained from gcc with the --save-temps option) and the output of adding -### to the gcc invocation. Output of GCC invoked with...
comment added
Please attach a preprocessed testcase (a *.i file obtained from gcc with the --save-temps option) and the output of adding -### to the gcc invocation.
comment added
Replying to mark@…:
Replying to dsteffen@…:
+#define dispatch_atomic_barrier() \
+	asm volatile("" : : : "memory")
This assembly instruction won't work on ARM or PPC, according to a similar function...
comment added
Replying to dsteffen@…:
+#define dispatch_atomic_barrier() \
+	asm volatile("" : : : "memory")
This assembly instruction won't work on ARM or PPC, according to a similar function in the Haskell source...
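For context, the point being made here is the difference between a compiler-only barrier and a hardware memory barrier. A hedged sketch of the two, assuming GCC's __sync_synchronize() builtin is an acceptable portable full barrier; the macro names are illustrative, not the actual libdispatch ones.

    /* Compiler barrier only: stops GCC from reordering loads and
     * stores across it, but emits no instruction, so a weakly
     * ordered CPU (ARM, PPC) can still reorder them at run time. */
    #define compiler_only_barrier()  __asm__ __volatile__("" ::: "memory")

    /* Full barrier: __sync_synchronize() is a compiler barrier and
     * also emits a hardware fence (e.g. mfence on x86, sync on PPC). */
    #define full_memory_barrier()    __sync_synchronize()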
comment added; owner set; status changed
status changed from new to accepted
owner set to dsteffen@…
GCC has apparently changed the __sync intrinsics to no longer be compiler barriers (nonsensical IMO since they are defined to generate...
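One way to check what a given GCC actually does here is to compile a tiny probe and inspect the generated assembly. A sketch under the assumption that gcc -O2 -S is available; the file and symbol names are made up for the illustration.

    /* barrier_probe.c -- compile with: gcc -O2 -S barrier_probe.c
     * and check whether the store to `data` is emitted before the
     * xchg that GCC generates for __sync_lock_test_and_set. */
    int data;
    int flag;

    void publish(int value)
    {
        data = value;
        /* With this explicit barrier the store cannot be sunk below
         * the intrinsic at the compiler level; without it, GCC's
         * documented acquire-only semantics for the intrinsic would
         * permit the reordering the ticket is concerned about. */
        __asm__ __volatile__("" ::: "memory");
        (void)__sync_lock_test_and_set(&flag, 1);
    }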
comment added
Disassembly of dispatch_async_f after fix:
0000000000006d50 <dispatch_async_f>:
    6d50:  55                      push   %rbp
    6d51:  48 89 e5                mov    %rsp,%rbp
    6d54:  48 89 5d d8             mov    %rbx,0xffffffffffffffd8(%rbp)
    6d58:  4c 89 65...