a baseline compiler for guile

3 June 2020 8:39 PM (gnu | igalia | scheme | baseline | optimizing)

Greets! Today I'd like to write about a new compiler that recently landed in Guile.

                                  The new compiler is a "baseline compiler", in the spirit of what modern web browsers use to get things running quickly. It is a very simple compiler whose goal is speed of compilation, not speed of generated code.

                                  Honestly I didn't think Guile needed such a thing. Guile's distribution model isn't like the web, where every page you visit requires the browser to compile fresh hot mess; in Guile I thought it would be reasonable for someone to compile once and run many times. I was never happy with compile latency but I thought it was inevitable and anyway amortized over time. Turns out I was wrong on both points!

The straw that broke the camel's back was Guix, which defines the graph of all installable packages in an operating system using Scheme code. Lately it has become apparent that when you update the set of available packages via a "guix pull", Guix spends too much time compiling the Scheme modules that contain the package graph.

                                  The funny thing is that it's not important that the package definitions be optimized; they just need to be compiled in a basic way so that they are quick to load. This is the essential use-case for a baseline compiler: instead of trying to make an optimizing compiler go fast by turning off all the optimizations, just write a different compiler that goes from a high-level intermediate representation straight to code.

                                  So that's what I did!

                                  it don't do much

                                  The baseline compiler skips any kind of flow analysis: there's no closure optimization, no contification, no unboxing of tagged numbers, no type inference, no control-flow optimizations, and so on. The only whole-program analysis that is done is a basic free-variables analysis so that closures can capture variables, as well as assignment conversion. Otherwise the baseline compiler just does a traversal over programs as terms of a simple tree intermediate language, emitting bytecode as it goes.
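
To give a feel for the shape of such a compiler, here is a minimal C++ sketch of a tree-walking bytecode emitter. Everything here is hypothetical (Guile's compiler is written in Scheme, and its tree IL and bytecode are much richer); the point is just the structure: one postorder traversal, no analysis.

#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical tree IL: constants, local refs, and primitive calls.
struct Expr {
  enum Kind { Const, Ref, PrimCall } kind;
  int64_t value;                            // for Const
  uint32_t slot;                            // for Ref
  uint8_t op;                               // for PrimCall
  std::vector<std::unique_ptr<Expr>> args;  // for PrimCall
};

// One postorder traversal: emit code for subexpressions, then the operator.
void emit(const Expr& e, std::vector<uint8_t>& bytecode) {
  switch (e.kind) {
    case Expr::Const:
      bytecode.push_back(0x01);             // hypothetical load-constant opcode
      for (int i = 0; i < 8; i++)
        bytecode.push_back(uint8_t(e.value >> (8 * i)));
      break;
    case Expr::Ref:
      bytecode.push_back(0x02);             // hypothetical local-ref opcode
      bytecode.push_back(uint8_t(e.slot));
      break;
    case Expr::PrimCall:                    // evaluate arguments, then apply
      for (const auto& arg : e.args)
        emit(*arg, bytecode);
      bytecode.push_back(e.op);
      break;
  }
}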

                                  Interestingly the quality of the code produced at optimization level -O0 is pretty much the same.

This graph shows generated-code performance of the CPS compiler relative to the new baseline compiler, at optimization level 0. Bars below the line mean the CPS compiler produces slower code; bars above mean CPS makes faster code. You can click and zoom in for details. Note that the Y axis is logarithmic.

                                  The tests in which -O0 CPS wins are mostly because the CPS-based compiler does a robust closure optimization pass that reduces allocation rate.

                                  At optimization level -O1, which adds partial evaluation over the high-level tree intermediate language and support for inlining "primitive calls" like + and so on, I am not sure why CPS peels out in the lead. No additional important optimizations are enabled in CPS at that level. That's probably something to look into.

[graph: generated-code performance at -O1, CPS compiler relative to the new baseline compiler]

                                  Note that the baseline of this graph is optimization level -O1, with the new baseline compiler.

                                  But as I mentioned, I didn't write the baseline compiler to produce fast code; I wrote it to produce code fast. So does it actually go fast?

                                  Well against the -O0 and -O1 configurations of the CPS compiler, it does excellently:

                                  Here you can see comparisons between what will be Guile 3.0.3's -O0 and -O1, compared against their equivalents in 3.0.2. (In 3.0.2 the -O1 equivalent is actually -O1 -Oresolve-primitives, if you are following along at home.) What you can see is that at these optimization levels, for these 8 files, the baseline compiler is around 4 times as fast.

                                  If we compare to Guile 3.0.3's default -O2 optimization level, or -O3, we see bigger disparities:

[graph: compile times at -O2 and -O3, CPS compiler relative to the new baseline compiler]

                                  Which is to say that Guile's baseline compiler runs at about 10x the speed of its optimizing compiler, which incidentally is similar to what I found for WebAssembly compilers a while back.

                                  Also of note is that -O0 and -O1 take essentially the same time, with -O1 often taking less time than -O0. This is because partial evaluation can make the program smaller, at a cost of being less straightforward to debug.

                                  Similarly, -O3 usually takes less time than -O2. This is because -O3 is allowed to assume top-level bindings that aren't exported from a module can be transformed to lexical bindings, which are more available for contification and inlining, which usually leads to smaller programs; it is a similar debugging/performance tradeoff to the -O0/-O1 case.

                                  But what does one gain when choosing to spend 10 times more on compilation? Here I have a gnarly graph that plots performance on some microbenchmarks for all the different optimization levels.

Like I said, it's gnarly, but the summary is that -O1 typically gets you a factor of 2 or 4 over -O0, and -O2 often gets you another factor of 2 above that. -O3 is mostly the same as -O2, except in magical circumstances like one microbenchmark where it adds an extra 16x or so over -O2.

                                  worse is better

I haven't yet seen numbers for this new compiler in Guix, but I hope it can have a good impact. Already in Guile itself, though, I've seen a couple of interesting advantages.

One is that because it produces code faster, Guile's bootstrap from source can take less time. There is also a felicitous feedback effect in that because the baseline compiler is much smaller than the CPS compiler, it takes less time to macro-expand, which reduces bootstrap time (as bootstrap has to pay the cost of expanding the compiler, until the compiler is compiled).

                                  The second fortunate result is that now I can use the baseline compiler as an oracle for the CPS compiler, when I'm working on new optimizations. There's nothing worse than suspecting that your compiler miscompiled itself, after all, and having a second compiler helps keep me sane.

                                  stay safe, friends

                                  The code, you ask? Voici.

                                  Although this work has been ongoing throughout the past month, I need to add some words on the now before leaving you: there is a kind of cognitive dissonance between nerding out on compilers in the comfort of my home, rain pounding on the patio, and at the same time the world on righteous fire. I hope it is clear to everyone by now that the US police are an essentially racist institution: they harass, maim, and murder Black people at much higher rates than whites. My heart is with the protestors. Godspeed to you all, from afar. At the same time, all my non-Black readers should reflect on the ways they participate in systems that support white supremacy, and on strategies to tear them down. I know I will be. Stay safe, wear eye protection, and until next time: peace.

understanding webassembly code generation throughput

14 April 2020 8:59 AM (igalia | compilers | firefox | spidermonkey | webassembly | bloomberg | v8 | javascriptcore)

                                  Greets! Today's article looks at browser WebAssembly implementations from a compiler throughput point of view. As I wrote in my article on Firefox's WebAssembly baseline compiler, web browsers have multiple wasm compilers: some that produce code fast, and some that produce fast code. Implementors are willing to pay the cost of having multiple compilers in order to satisfy these conflicting needs. So how well do they do their jobs? Why bother?

                                  In this article, I'm going to take the simple path and just look at code generation throughput on a single chosen WebAssembly module. Think of it as X-ray diffraction to expose aspects of the inner structure of the WebAssembly implementations in SpiderMonkey (Firefox), V8 (Chrome), and JavaScriptCore (Safari).

                                  experimental setup

As a workload, I am going to use a version of the "Zen Garden" demo. This is a 40-megabyte game engine and rendering demo, originally released as a native demo and compiled to WebAssembly a couple years later. Unfortunately the original URL for the demo was disabled at some point in late 2019, so it no longer has a home on the web. A bit of a weird situation and I am not clear on licensing either. In any case I have a version downloaded, and have hacked out a minimal set of "imports" that the WebAssembly module needs from the host to allow the module to compile and link when run from a JavaScript shell, without requiring WebGL and similar facilities. So the benchmark is just to instantiate a WebAssembly module from the 40-megabyte byte array and see how long it takes. It would be better if I had more test cases (and would be happy to add them to the comparison!) but this is a start.

I start by benchmarking the various WebAssembly implementations, firstly in their standard configuration and then setting special run-time flags to measure the performance of the component compilers. I run these tests on the core-rich machine that I use for browser development (2 CPUs for a total of 40 logical cores). The default-configuration numbers are therefore not indicative of performance on a low-end Android phone, but we can use them to extract aspects of the different implementations.

                                  Since I'm interested in compiler throughput, I'm not particularly concerned about how well a compiler will use all 40 cores. Therefore when testing the specific compilers I will set implementation-specific flags to disable parallelism in the compiler and GC: --single-threaded on V8, --no-threads on SpiderMonkey, and --useConcurrentGC=false --useConcurrentJIT=false on JSC. To further restrict any threads that the implementation might decide to spawn, I'll bind these to a single core on my machine using taskset -c 4. Otherwise the machine is in its normal configuration (nothing else significant running, all cores available for scheduling, turbo boost enabled).

                                  I'll express results in nanoseconds per WebAssembly code byte. Of the 40 megabytes or so in the Zen Garden demo, only 23 891 164 bytes are actually function code; the rest is mostly static data (textures and so on). So I'll divide the total time by this code byte count.

                                  I tested V8 at git revision 0961376575206, SpiderMonkey at hg revision 8ec2329bef74, and JavaScriptCore at subversion revision 259633. The benchmarks can be run using just a shell; see the pull request. I timed how long it took to instantiate the Zen Garden demo, ensuring that a basic export was callable. I collected results from 20 separate runs, sleeping a second between them. The bars in the charts below show the median times, with a histogram overlay of all results.

                                  results & analysis

                                  We can see some interesting results in this graph. Note that the Y axis is logarithmic. The "concurrent tiering" results in the graph correspond to the default configurations (no special flags, no taskset, all cores available).

                                  The first interesting conclusions that pop out for me concern JavaScriptCore, which is the only implementation to have a baseline interpreter (run using --useWasmLLInt=true --useBBQJIT=false --useOMGJIT=false). JSC's WebAssembly interpreter is actually structured as a compiler that generates custom WebAssembly-specific bytecode, which is then run by a custom interpreter built using the same infrastructure as JSC's JavaScript interpreter (the LLInt). Directly interpreting WebAssembly might be possible as a low-latency implementation technique, but since you need to validate the WebAssembly anyway and eventually tier up to an optimizing compiler, apparently it made sense to emit fresh bytecode.

                                  The part of JSC that generates baseline interpreter code runs slower than SpiderMonkey's baseline compiler, so one is tempted to wonder why JSC bothers to go the interpreter route; but then we recall that on iOS, we can't generate machine code in some contexts, so the LLInt does appear to address a need.

                                  One interesting feature of the LLInt is that it allows tier-up to the optimizing compiler directly from loops, which neither V8 nor SpiderMonkey support currently. Failure to tier up can be quite confusing for users, so good on JSC hackers for implementing this.

                                  JavaScriptCore's baseline compiler (run using --useWasmLLInt=false --useBBQJIT=true --useOMGJIT=false) runs much more slowly than SpiderMonkey's or V8's baseline compiler, which I think can be attributed to the fact that it builds a graph of basic blocks instead of doing a one-pass compile. To me these results validate SpiderMonkey's and V8's choices, looking strictly from a latency perspective.

I don't have graphs for code generation throughput of JavaScriptCore's optimizing compiler (run using --useWasmLLInt=false --useBBQJIT=false --useOMGJIT=true); it turns out that JSC wants one of the lower tiers to be present, and will only tier up from the LLInt or from BBQ. Oh well!

                                  V8 and SpiderMonkey, on the other hand, are much of the same shape. Both implement a streaming baseline compiler and an optimizing compiler; for V8, we get these via --liftoff --no-wasm-tier-up or --no-liftoff, respectively, and for SpiderMonkey it's --wasm-compiler=baseline or --wasm-compiler=ion.

                                  Here we should conclude directly that SpiderMonkey generates code around twice as fast as V8 does, in both tiers. SpiderMonkey can generate machine code faster even than JavaScriptCore can generate bytecode, and optimized machine code faster than JSC can make baseline machine code. It's a very impressive result!

                                  Another conclusion concerns the efficacy of tiering: for both V8 and SpiderMonkey, their baseline compilers run more than 10 times as fast as the optimizing compiler, and the same ratio holds between JavaScriptCore's baseline interpreter and compiler.

Finally, it would seem that the current cross-implementation benchmark for lowest-tier code generation throughput on a desktop machine would then be around 50 ns per WebAssembly code byte for a single core, which corresponds to receiving code over the wire at somewhere around 160 megabits per second (Mbps). If we add in concurrency and manage to farm out compilation tasks well, we can obviously double or triple that bitrate. Optimizing compilers run at least an order of magnitude slower. We can conclude that to the desktop end user, WebAssembly compilation time is indistinguishable from download time for the lowest tier. The optimizing tier is noticeably slower though, running at more like 10-15 Mbps per core, so time-to-tier-up is still a concern for faster networks.
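
Spelling out the arithmetic behind those bitrate figures:

1 byte / 50 ns   =  20 000 000 bytes/s  =  20 MB/s  =  160 Mbit/s   (baseline tier)
1 byte / 500 ns  =   2 000 000 bytes/s  =   2 MB/s  =   16 Mbit/s   (optimizing tier, at 10x slower)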

                                  Going back to the question posed at the start of the article: yes, tiering shows a clear benefit in terms of WebAssembly compilation latency, letting users interact with web sites sooner. So that's that. Happy hacking and until next time!

multi-value webassembly in firefox: a binary interface

8 April 2020 9:02 AM (igalia | compilers | firefox | spidermonkey | webassembly | bloomberg)

                                  Hey hey hey! Hope everyone is staying safe at home in these weird times. Today I have a final dispatch on the implementation of the multi-value feature for WebAssembly in Firefox. Last week I wrote about multi-value in blocks; this week I cover function calls.

                                  on the boundaries between things

                                  In my article on Firefox's baseline compiler, I mentioned that all WebAssembly engines in web browsers treat the function as the unit of compilation. This facilitates streaming, parallel compilation of WebAssembly modules, by farming out compilation of individual functions to worker threads. It also allows for easy tier-up from quick-and-dirty code generated by the low-latency baseline compiler to the faster code produced by the optimizing compiler.

                                  There are some interesting Conway's Law implications of this choice. One is that division of compilation tasks becomes an opportunity for division of human labor; there is a whole team working on the experimental Cranelift compiler that could replace the optimizing tier, and in my hackings on Firefox I have had minimal interaction with them. To my detriment, of course; they are fine people doing interesting things. But the code boundary means that we don't need to communicate as we work on different parts of the same system.

                                  Boundaries are where places touch, and sometimes for fluid crossing we have to consider boundaries as places in their own right. Functions compiled with the baseline compiler, with Ion (the production optimizing compiler), and with Cranelift (the experimental optimizing compiler) are all able to call each other because they actively maintain a common boundary, a binary interface (ABI). (Incidentally the A originally stands for "application", essentially reflecting division of labor between groups of people making different components of a software system; Conway's Law again.) Let's look closer at this boundary-place, with an eye to how it changes with multi-value.

                                  what's in an ABI?

Among other things, an ABI specifies a calling convention: which arguments go in registers, which on the stack, how the stack values are represented, how results are returned to the callers, which registers are preserved over calls, and so on. Intra-WebAssembly calls are a closed world, so we can design a custom ABI if we like; that's what V8 does. Sometimes WebAssembly may call functions from the run-time, though, and so it may be useful to be closer to the C++ ABI on that platform (the "native" ABI); that's what Firefox does. (Incidentally here I think Firefox is probably leaving a bit of performance on the table on Windows by using the Windows x64 calling convention, which only allows four register parameters. I haven't measured though so perhaps it doesn't matter.) Using something closer to the native ABI makes debugging easier as well, as native debugger tools can apply more easily.

                                  One thing that most native ABIs have in common is that they are really only optimized for a single result. This reflects their heritage as artifacts from a world built with C and C++ compilers, where there isn't a concept of a function with more than one result. If multiple results are required, they are represented instead as arguments, typically as pointers to memory somewhere. Consider the AMD64 SysV ABI, used on Unix-derived systems, which carefully specifies how to pass arbitrary numbers of arbitrary-sized data structures to a function (§3.2.3), while only specifying what to do for a single return value. If the return value is too big for registers, the ABI specifies that a pointer to result memory be passed as an argument instead.
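
To make the single-result bias concrete, here is a small C++ illustration of what compilers typically arrange under the SysV ABI; the lowered form in the comment is a sketch, not literal compiler output.

#include <cstdint>

// 24 bytes: too big for the return registers, so the SysV ABI says to
// return it via a hidden pointer argument provided by the caller.
struct Triple { int64_t a, b, c; };

Triple make_triple(void) { return Triple{1, 2, 3}; }

// Approximately what gets compiled: the caller reserves memory for the
// result and passes its address (in %rdi); the callee fills it in.
//
//   Triple* make_triple_lowered(Triple* result) {
//     result->a = 1; result->b = 2; result->c = 3;
//     return result;
//   }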

                                  So in a multi-result WebAssembly world, what are we to do? How should a function return multiple results to its caller? Let's assume that there are some finite number of general-purpose and floating-point registers devoted to return values, and that if the return values will fit into those registers, then that's where they go. The problem is then to determine which results will go there, and if there are remaining results that don't fit, then we have to put them in memory. The ABI should indicate how to address that memory.

                                  first thought: stack results precede stack arguments

                                  When a function needs some of its arguments passed on the stack, it doesn't receive a pointer to those arguments; rather, the arguments are placed at a well-known offset to the stack pointer.

                                  We could do the same thing with stack results, either reserving space deeper on the stack than stack arguments, or closer to the stack pointer. With the advent of tail calls, it would make more sense to place them deeper on the stack. Like this:

                                  The diagram above shows the ordering of stack arguments as implemented by Firefox's WebAssembly compilers: later arguments are deeper (farther from the stack pointer). It's an arbitrary choice that happens to match up with what the native ABIs do, as it was easier to re-use bits of the already-existing optimizing compiler that way. (Native ABIs use this stack argument ordering because of sloppiness in a version of C from before I was born. If you were starting over from scratch, probably you wouldn't do things this way.)

                                  Stack result order does matter to the baseline compiler, though. It's easier if the stack results are placed in the same order in which they would be pushed on the virtual stack, so that when the function completes, the results can just be memmove'd down into place (if needed). The same concern dictates another aspect of our ABI: unlike calls, registers are allocated to the last results rather than the first results. This is to make it easy to preserve stack invariant (1) from the previous article.

                                  While a stack argument is logically consumed by a call, a stack result starts life with a call. As such, if you reserve space for stack results just by decrementing the stack pointer before a call, probably you will need to load the results eagerly into registers thereafter or shuffle them into other positions to be able to free the allocated stack space.

Eager shuffling is busy-work that should be avoided if possible. It's hard to avoid in the baseline compiler. For example, a call to a function with 10 arguments will consume 10 values from the temporary stack; any results will be pushed on after removing argument values from the stack. If there are any stack results, it's almost impossible to avoid a post-call memmove, to move stack results to where they should be before the 10 argument values were pushed on (and probably spilled). So the baseline compiler case is not optimal.

                                  However, things get gnarlier with the Ion optimizing compiler. Like many other optimizing compilers, Ion is designed to compute the necessary stack frame size ahead of time, and to never move the stack pointer during an activation. The only exception is for pushing on any needed stack arguments for nested calls (which are popped directly after the nested call). So in that case, assuming there are a number of multi-value calls in a stack frame, we'll be shuffling in the optimizing compiler as well. Not great.

Besides the need to shuffle, stack arguments and stack results differ as regards ownership and garbage collection. A callee "owns" the memory for its stack arguments; it is responsible for them. The caller can't assume anything about the contents of that memory after a call, especially if the WebAssembly implementation supports tail calls (a whole 'nother blog post, that). If the values being passed are just bits, that's one thing, but with the reference types proposal, some result values may be managed by the garbage collector. The callee is responsible for making stack arguments visible to the garbage collector; the caller is responsible for the results. The caller will need to emit metadata to allow the garbage collector to see stack result references. For this reason, a stack result actually starts life just before a call, because it can become initialized at any point and thus needs to be traced during the entire callee activation. Not all callers can easily add garbage collection roots for writable stack slots, so the need to place stack results in a fixed position complicates calling multi-value WebAssembly functions in some cases (e.g. from C++).

                                  second thought: pointers to individual stack results

In C, the standard idiom for a function that needs to return additional values is to take a pointer argument for each extra result:

#include <stdint.h>

int64_t foo(int64_t* a, int64_t* b) {
  *a = 1;
  *b = 2;
  return 3;
}

void call_foo(void) {
  int64_t a, b, c;
  c = foo(&a, &b);
}
                                  

                                  This program shows us a possibility for encoding WebAssembly's multiple return values: pass an additional argument for each stack result, pointing to the location to which to write the stack result. Like this:

[diagram: each stack result passed via its own pointer argument]

                                  The result pointers are normal arguments, subject to normal argument allocation. In the above example, given that there are already stack arguments, they will probably be passed on the stack, but in many cases the stack result pointers may be passed in registers.

                                  The result locations themselves don't even need to be on the stack, though they certainly will be in intra-WebAssembly calls. However the ability to write to any memory is a useful form of flexibility when e.g. calling into WebAssembly from C++.

                                  third thought: stack result area, passed as pointer

                                  Given that stack results are going to be written to memory, it doesn't really matter where they will be written, from the perspective of the optimizing compiler at least. What if we allocated them all in a block and just passed one pointer to the block? Like this:

Here there's just one additional argument, no matter how many stack results. While we're at it, we can specify that the layout of the stack results should be the same as how they would be written to the baseline stack, to make the baseline compiler's job easier.
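
In the style of the earlier C example, the stack result area convention might look like this; a hypothetical sketch, not Firefox's actual lowering.

#include <cstdint>

// All stack results live in one caller-allocated block whose layout is
// fixed by the ABI.
struct ResultArea { int64_t r0, r1; };

int64_t foo(ResultArea* results) {
  results->r0 = 1;
  results->r1 = 2;
  return 3;  // the last result can still travel in a register
}

void call_foo(void) {
  ResultArea area;  // one reservation, however many stack results
  int64_t c = foo(&area);
}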

                                  As I started implementation with the baseline compiler, I chose this third approach, essentially because I was already allocating space for the results in a block in this way by bumping the stack pointer.

                                  When I got to the optimizing compiler, however, it was quite difficult to convince Ion to allocate an area on the stack of the right shape.

                                  Looking back on it now, I am not sure that I made the right choice. The thing is, the IonMonkey compiler started life as an optimizing compiler for JavaScript. It can represent unboxed values, which is how it came to be used as a compiler for asm.js and later WebAssembly, and it does a good job on them. However it has never had to represent aggregate data structures like a C++ class, so it didn't have support for spilling arbitrary-sized data to the stack. It took a while staring at the register allocator to convince it to allocate arbitrary-sized stack regions, and then to allocate component scalar values out of those regions. If I had just asked the register allocator to give me one appropriate-sized stack slot for each scalar, and hacked out the ability to pass separate pointers to the stack slots to WebAssembly calls with stack results, then I would have had an easier time of it, and perhaps stack slot allocation could be more dense because multiple results wouldn't need to be allocated contiguously.

As it is, I did manage to hack it in, and I think in a way that doesn't regress. I added a layer over an argument type vector that adds a synthetic stack results pointer argument, if the function returns stack results; iterating over this type with ABIArgIter will allocate a stack result area pointer, either as a register argument or a stack argument. In the optimizing compiler, I added a kind of value allocation corresponding to a variable-sized stack area (using pointer tagging again!), and extended the register allocator to allocate LStackArea, and the component stack results. Interestingly, I had to add a kind of definition that starts life on the stack; previously all Ion results started life in registers and were only spilled if needed.

In the end, a function will capture the incoming stack result area argument, either as a normal SSA value (for Ion) or stored to a stack slot (baseline), and when returning will write stack results to that pointer as appropriate. Passing in a pointer as an argument did make it relatively easy to implement calls between WebAssembly and C++; getting the variable-shape result area known to the garbage collector for C++-to-WebAssembly calls was simple in the end, but took me a while to figure out.

                                  Finally I was a bit exhausted from multi-value work and ready to walk away from the "JS API", the bit that allows multi-value WebAssembly functions to be called from JavaScript (they return an array) or for a JavaScript function to return multiple values to WebAssembly (via an iterable) -- but then when I got to thinking about this blog post I preferred to implement the feature rather than document its lack. Avoidance-of-document-driven development: it's a thing!

                                  towards deployment

                                  As I said in the last article, the multi-value feature is about improved code generation and also making a more capable base for expressing further developments in the WebAssembly language.

                                  As far as code generation goes, things are progressing but it is still early days. Thomas Lively has implemented support in LLVM for emitting return of C++ aggregates via multiple results, which is enabled via the -experimental-multivalue-abi cc1 flag. Thomas has also been implementing multi-value support in the binaryen WebAssembly toolchain component, used by the emscripten C++-to-WebAssembly toolchain. I think it will be a few months though before everything lands in a way that end users can take advantage of.

On the specification side, the multi-value feature has been at phase 4 since January, which basically means things are all done there.

                                  Implementation-wise, V8 has had experimental support since 2017 or so, and the feature was staged last fall, although V8 doesn't yet support multi-value in their baseline compiler. WebKit also landed support last fall.

                                  Unlike V8 and SpiderMonkey, JavaScriptCore (the JS and wasm engine in WebKit) actually implements a WebAssembly interpreter as their solution to the one-pass streaming compilation problem. Then on the compiler side, there are two tiers that both operate on basic block graphs (OMG and BBQ; I just puked a little in my mouth typing that). This strategy makes the compiler implementation quite straightforward. It's also an interesting design point because JavaScriptCore's garbage collector scans the stack conservatively; there's no need for the compiler to do bookkeeping on the GC's behalf, which I'm sure was a relief to the hacker. Anyway, multi-value in WebKit is done too.

The new thing of course is that finally, in Firefox, the feature is now fully implemented (woo) and enabled by default on Nightly builds (woo!). I did that! It took me a while! Perhaps too long? Anyway it's done. Thanks again to Bloomberg for supporting this work; large ups to y'all for helping the web move forward.

                                  See you next time with a more general article rounding up compile-time benchmarks on a variety of WebAssembly implementations. Until then, happy hacking!

multi-value webassembly in firefox: from 1 to n

3 April 2020 10:56 AM (igalia | compilers | firefox | spidermonkey | webassembly | bloomberg)

Greetings, hackers! Today I'd like to write about something I worked on recently: implementation of the multi-value future feature of WebAssembly in Firefox, as sponsored by Bloomberg.

                                  In the "minimum viable product" version of WebAssembly published in 2018, there were a few artificial restrictions placed on the language. Functions could only return a single value; if a function would naturally return two values, it would have to return at least one of them by writing to memory. Loops couldn't take parameters; any loop state variables had to be stored to and loaded from indexed local variables at each iteration. Similarly, any block that would naturally return more than one result would also have to do so via locals.

This restriction is lifted with the multi-value proposal. Function types now map from result type to result type, where a result type is a sequence of value types. That is to say, just as functions can take multiple arguments, they can return multiple results. Similarly, with the multi-value proposal, block types are now the same as function types: loops and blocks can take arguments and return any number of results. This change improves the expressiveness of WebAssembly as a compilation target; a C++ program compiled to multi-value WebAssembly can be encoded in fewer bytes than before. Multi-value also establishes a base for other language extensions. For example, the exception handling proposal builds on multi-value to pass multiple values to catch blocks.
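
As a concrete example, a C++ function that naturally returns two values can, with a multi-value-aware toolchain, compile to a WebAssembly function of type [i64 i64] -> [i64 i64], with no trip through linear memory for the extra result:

#include <cstdint>
#include <utility>

// Pre-multi-value, one of these results would have to be returned
// through linear memory via an out-pointer; with multi-value, both
// can be returned directly.
std::pair<int64_t, int64_t> divmod(int64_t n, int64_t d) {
  return {n / d, n % d};
}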

                                  So, that's multi-value. You would think that relaxing a restriction would be easy, but you'd be wrong! This task took me 5 months and had a number of interesting gnarly bits. This article is part one of two about interesting aspects of implementing multi-value in Firefox, specifically focussing on blocks. We'll talk about multi-value function calls next week.

                                  multi-value in blocks

                                  In the last article, I presented the basic structure of Firefox's WebAssembly support: there is a baseline compiler optimized for low latency and an optimizing compiler optimized for throughput. (There is also Cranelift, a new experimental compiler that may replace the current implementation of the optimizing compiler; but that doesn't affect the basic structure.)

The optimizing compiler applies traditional compiler techniques: SSA graph construction, where values flow into and out of graphs using the usual defs-dominate-uses relationship. The only control-flow joins are loop entry and (possibly) block exit, so the addition of loop parameters means that in multi-value there are some new phi variables at loop entry, and the expansion of block result count from [0,1] to [0,n] means that you may have more block exit phi variables. But these compilers are built to handle such situations; you just build the SSA and let the optimizing compiler go to town.

                                  The problem comes in the baseline compiler.

                                  from 1 to n

Recall that the baseline compiler is optimized for compiler speed, not compiled speed. If there are only ever going to be 0 or 1 results from a block, for example, the baseline compiler's internal data structures will use something like a Maybe<ValType> to represent that block result.

                                  If you then need to expand this to hold a vector of values, the naïve approach of using a Vector<ValType> would mean heap allocation and indirection, and thus would regress the baseline compiler.

                                  In this case, and in many other similar cases, the solution is to use value tagging to represent 0 or 1 value type directly in a word, and the general case by linking out to an external vector. As block types are function types, they actually appear as function types in the WebAssembly type section, so they are already parsed; the BlockType in that case can just refer out to already-allocated memory.
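
The pattern looks something like the following minimal sketch; this is illustrative only, not the actual BlockType.

#include <cassert>
#include <cstdint>
#include <vector>

// Use the low bits of a word as a tag: the common 0- and 1-result
// cases are stored inline, and the general case points at
// already-allocated memory.
class ResultType {
  uintptr_t bits_;  // low 2 bits: 0 = empty, 1 = single type, 2 = vector
  explicit ResultType(uintptr_t bits) : bits_(bits) {}

 public:
  static ResultType empty() { return ResultType(0); }
  static ResultType single(uint8_t valType) {
    return ResultType((uintptr_t(valType) << 2) | 1);
  }
  static ResultType vector(const std::vector<uint8_t>* types) {
    assert((uintptr_t(types) & 3) == 0);  // tag bits must be free
    return ResultType(uintptr_t(types) | 2);
  }
  size_t length() const {
    switch (bits_ & 3) {
      case 0: return 0;
      case 1: return 1;
      default:
        return reinterpret_cast<const std::vector<uint8_t>*>(
                   bits_ & ~uintptr_t(3))->size();
    }
  }
};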

                                  In fact this value-tagging pattern applies all over the place. (The jit/ links above are for the optimizing compiler, but they relate to function calls; will write about that next week.) I have a bit of pause about value tagging, in that it's gnarly complexity and I didn't measure the speed of alternative implementations, but it was a useful migration strategy: value tagging minimizes performance risk to existing specialized use cases while adding support for new general cases. Gnarly it is, then.

                                  control-flow joins

                                  I didn't mention it in the last article, but there are two important invariants regarding stack discipline in the baseline compiler. Recall that there's a virtual stack, and that some elements of the virtual stack might be present on the machine stack. There are four kinds of virtual stack entry: register, constant, local, and spilled. Locals indicate local variable reads and are mostly like registers in practice; when registers spill to the stack, locals do too. (Why spill to the temporary stack instead of leaving the value in the local variable slot? Because locals are mutable. A local.get captures a local variable value at its point of execution. If future code changes the local variable value, you wouldn't want the captured value to change.)
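
Concretely, a virtual stack entry can be represented along these lines; a hypothetical sketch, not SpiderMonkey's actual types.

#include <cstdint>
#include <vector>

// The four kinds of virtual stack entry. Only Spilled entries occupy
// space on the machine stack.
struct StackEntry {
  enum Kind { Register, Constant, Local, Spilled } kind;
  union {
    uint8_t reg;      // Register: which machine register holds the value
    int32_t value;    // Constant: the immediate value itself
    uint32_t slot;    // Local: which local variable was read
    uint32_t offset;  // Spilled: location on the machine stack
  };
};

using VirtualStack = std::vector<StackEntry>;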

                                  Digressing, the stack invariants:

                                  1. Spilled values precede registers and locals on the virtual stack. If u and v are virtual stack entries and u is older than v, then if u is in a register or is a local, then v is not spilled.

                                  2. Older values precede newer values on the machine stack. Again for u and v, if they are both spilled, then u will be farther from the stack pointer than v.

                                  There are five fundamental stack operations in the baseline compiler; let's examine them to see how the invariants are guaranteed. Recall that before multi-value, targets of non-local exits (e.g. of the br instruction) could only receive 0 or 1 value; if there is a value, it's passed in a well-known register (e.g. %rax or %xmm0). (On 32-bit machines, 64-bit values use a well-known pair of registers.)

                                  push(v)
                                  Results of WebAssembly operations never push spilled values, neither onto the virtual nor the machine stack. v is either a register, a constant, or a reference to a local. Thus we guarantee both (1) and (2).
                                  pop() -> v
                                  Doesn't affect older stack entries, so (1) is preserved. If the newest stack entry is spilled, you know that it is closest to the stack pointer, so you can pop it by first loading it to a register and then incrementing the stack pointer; this preserves (2). Therefore if it is later pushed on the stack again, it will not be as a spilled value, preserving (1).
                                  spill()
Flushes the virtual stack to the machine stack: traversing the virtual stack from oldest to newest, each register, constant, or local entry is stored to the machine stack and its entry is marked as spilled. Spilling in age order places older values farther from the stack pointer, so both (1) and (2) are preserved.
                                  return(height, v)
                                  This is the stack operation corresponding to a block exit (local or nonlocal). We drop items from the virtual and machine stack until the stack height is height. In WebAssembly 1.0, if the target continuation takes a value, then the jump passes a value also; in that case, before popping the stack, v is placed in a well-known register appropriate to the value type. Note however that v is not pushed on the virtual stack at the return point. Popping the virtual stack preserves (1), because a stack and its prefix have the same invariants; popping the machine stack also preserves (2).
                                  capture(t)
                                  Whereas return operations happen at block exits, capture operations happen at the target of block exits (the continuation). If no value is passed to the continuation, a capture is a no-op. If a value is passed, it's in a register, so we just push that register onto the virtual stack. Both invariants are obviously preserved.

                                  Note that a value passed to a continuation via return() has a brief instant in which it has no name -- it's not on the virtual stack -- but only a location -- it's in a well-known place. capture() then gives that floating value a name.

                                  Relatedly, there is another invariant, that the allocation of old values on block entry is the same as their allocation on block exit, so that all predecessors of the block exit flow all values via the same places. This is preserved by spilling on block entry. It's a big hammer, but effective.

                                  So, given all this, how do we pass multiple values via return()? We don't have unlimited registers, so the %rax strategy isn't going to work.

                                  The answer for the baseline compiler is informed by our lean into the stack machine principle. Multi-value returns are allocated in such a way that a capture() can push them onto the virtual stack. Because spilled values must precede registers, we therefore allocate older results on the stack, and put the last result in a register (or register pair for i64 on 32-bit platforms). Note that it's possible in theory to allocate multiple results to registers; we'll touch on this next week.

Therefore the implementation of return(height, v1..vn) is straightforward: we first pop register results, then spill the remaining virtual stack items, then shuffle stack results down towards height. This should result in a contiguous block of stack results towards the frame pointer. However because const values aren't present on the machine stack, depending on the stack height difference, it may mean a split between moving some values toward the frame pointer and some towards the stack pointer, then filling in by spilling constants. It's gnarly, but it is what it is. Note that the links to the return and capture implementations above are to the post-multi-value world, so you can see all the details there.

                                  that's it!

                                  In summary, the hard part of multi-value blocks was reworking internal compiler data structures to be able to represent multi-value block types, and then figuring out the low-level stack manipulations in the baseline compiler. The optimizing compiler on the other hand was pretty easy.

When it comes to calls though, that's another story. We'll get to that one next week. Thanks again to Bloomberg for supporting this work; I'm really delighted that Igalia and Bloomberg have been working together for a long time (coming on 10 years now!) to push the web platform forward. A special thanks also to Mozilla's Lars Hansen for his patience reviewing these patches. Until next week, then, stay at home & happy hacking!

                                  firefox's low-latency webassembly compiler

25 March 2020 (igalia | compilers | firefox | spidermonkey | webassembly | bloomberg)

                                  Good day!

                                  Today I'd like to write a bit about the WebAssembly baseline compiler in Firefox.

                                  background: throughput and latency

                                  WebAssembly, as you know, is a virtual machine that is present in web browsers like Firefox. An important initial goal for WebAssembly was to be a good target for compiling programs written in C or C++. You can visit a web page that includes a program written in C++ and compiled to WebAssembly, and that WebAssembly module will be downloaded onto your computer and run by the web browser.

                                  A good virtual machine for C and C++ has to be fast. The throughput of a program compiled to WebAssembly (the amount of work it can get done per unit time) should be approximately the same as its throughput when compiled to "native" code (x86-64, ARMv7, etc.). WebAssembly meets this goal by defining an instruction set that consists of similar operations to those directly supported by CPUs; WebAssembly implementations use optimizing compilers to translate this portable instruction set into native code.

                                  There is another dimension of fast, though: not just work per unit time, but also time until first work is produced. If you want to go play Doom 3 on the web, you care about frames per second but also time to first frame. Therefore, WebAssembly was designed not just for high throughput but also for low latency. This focus on low-latency compilation expresses itself in two ways: binary size and binary layout.

                                  On the size front, WebAssembly is optimized to encode small files, reducing download time. One way in which this happens is to use a variable-length encoding anywhere an instruction needs to specify an integer. In the usual case where, for example, there are fewer than 128 local variables, this means that a local.get instruction can refer to a local variable using just one byte. Another strategy is that WebAssembly programs target a stack machine, reducing the need for the instruction stream to explicitly load operands or store results. Note that size optimization only goes so far: it's assumed that the bytes of the encoded module will be compressed by gzip or some other algorithm, so sub-byte entropy coding is out of scope.
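
The encoding in question is LEB128. A sketch of the unsigned decoder shows why small numbers are cheap (no bounds or overflow checking, for brevity):

#include <cstddef>
#include <cstdint>

// Unsigned LEB128: 7 payload bits per byte, high bit set on every byte
// except the last. Indices below 128 therefore fit in a single byte.
uint32_t decodeULEB128(const uint8_t* bytes, size_t* length) {
  uint32_t result = 0;
  int shift = 0;
  size_t i = 0;
  uint8_t byte;
  do {
    byte = bytes[i++];
    result |= uint32_t(byte & 0x7f) << shift;
    shift += 7;
  } while (byte & 0x80);
  *length = i;  // number of bytes consumed
  return result;
}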

                                  On the layout side, the WebAssembly binary encoding is sorted by design: definitions come before uses. For example, there is a section of type definitions that occurs early in a WebAssembly module. Any use of a declared type can only come after the definition. In the case of functions which are of course mutually recursive, function type declarations come before the actual definitions. In theory this allows web browsers to take a one-pass, streaming approach to compilation, starting to compile as functions arrive and before download is complete.

                                  implementation strategies

                                  The goals of high throughput and low latency conflict with each other. To get best throughput, a compiler needs to spend time on code motion, register allocation, and instruction selection; to get low latency, that's exactly what a compiler should not do. Web browsers therefore take a two-pronged approach: they have a compiler optimized for throughput, and a compiler optimized for latency. As a WebAssembly file is being downloaded, it is first compiled by the quick-and-dirty low-latency compiler, with the goal of producing machine code as soon as possible. After that "baseline" compiler has run, the "optimizing" compiler works in the background to produce high-throughput code. The optimizing compiler can take more time because it runs on a separate thread. When the optimizing compiler is done, it replaces the baseline code. (The actual heuristics about whether to do baseline + optimizing ("tiering") or just to go straight to the optimizing compiler are a bit hairy, but this is a summary.)

                                  This article is about the WebAssembly baseline compiler in Firefox. It's a surprising bit of code and I learned a few things from it.

                                  design questions

                                  Knowing what you know about the goals and design of WebAssembly, how would you implement a low-latency compiler?

                                  It's a question worth thinking about so I will give you a bit of space in which to do so.

                                  .

                                  .

                                  .

                                  After spending a lot of time in Firefox's WebAssembly baseline compiler, I have extracted the following principles:

1. The function is the unit of compilation

                                  2. One pass, and one pass only

                                  3. Lean into the stack machine

4. No noodling!

                                  In the remainder of this article we'll look into these individual points. Note, although I have done a good bit of hacking on this compiler, its design and original implementation comes mainly from Mozilla hacker Lars Hansen, who also currently maintains it. All errors of exegesis are mine, of course!

                                  the function is the unit of compilation

                                  As we mentioned, in the binary encoding of a WebAssembly module, all definitions needed by any function come before all function definitions. This naturally leads to a partition between two phases of bytestream parsing: an initial serial phase that collects the set of global type definitions, annotations as to which functions are imported and exported, and so on, and a subsequent phase that compiles individual functions in an essentially independent manner.

                                  The advantage of this approach is that compiling functions is a natural task unit of parallelism. If the user has a machine with 8 virtual cores, the web browser can keep one or two cores for the browser itself and farm out WebAssembly compilation tasks to the rest. The result is that the compiled code is available sooner.

This simple approach does have some down-sides, in that it leaves interprocedural optimizations on the table (inlining, contification, custom calling conventions, speculative optimizations). This is mitigated in two ways, the most obvious being that LLVM or whatever produced the WebAssembly has ideally already done whatever inlining might be fruitful. The second is that WebAssembly is designed for predictable performance. In JavaScript, an implementation needs to do run-time type feedback and speculative optimizations to get good performance, but the result is that it can be hard to understand why a program is fast or slow. The designers and implementers of WebAssembly in browsers all had first-hand experience with JavaScript virtual machines, and actively wanted to avoid unpredictable performance in WebAssembly. Therefore there is currently a kind of détente among the various browser vendors, that everyone has agreed that they won't do speculative inlining -- yet, anyway. Who knows what will happen in the future, though.

                                  Digressing, the summary here is that the baseline compiler receives an individual function body as input, and generates code just for that function.

                                  one pass, and one pass only

                                  The WebAssembly baseline compiler makes one pass through the bytecode of a function. Nowhere in all of this are we going to build an abstract syntax tree or a graph of basic blocks. Let's follow through how that works.

                                  Firstly, emitFunction simply emits a prologue, then the body, then an epilogue. emitBody is basically a big loop that consumes opcodes from the instruction stream, dispatching to opcode-specific code emitters (e.g. emitAddI32).
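
In cartoon form, the loop looks something like this. The names and structure are hypothetical (see emitBody for the real thing), though the opcode values are WebAssembly's own:

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch of a one-pass wasm compiler loop: read an
// opcode, emit machine code for it, repeat. No AST, no basic-block
// graph. (Real opcodes carry immediates; these three do not.)
enum class Op : uint8_t { End = 0x0b, I32Add = 0x6a, I32Sub = 0x6b };

bool emitBody(const std::vector<uint8_t>& body) {
  for (size_t pc = 0; pc < body.size(); pc++) {
    switch (static_cast<Op>(body[pc])) {
      case Op::I32Add: /* emitAddI32(); */ break;
      case Op::I32Sub: /* emitSubI32(); */ break;
      case Op::End:    return true;   // bodies are terminated by 0x0b
      default:         return false;  // unknown opcode: validation error
    }
  }
  return false;  // ran off the end without seeing End
}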

                                  The opcode-specific code emitters are also responsible for validating their arguments; for example, emitAddI32 is wrapped in an assertion that there are two i32 values on the stack. This validation logic is shared by a templatized codestream iterator so that it can be re-used by the optimizing compiler, as well as by the publicly-exposed WebAssembly.validate function.

A corollary of this approach is that machine code is emitted in bytestream order; if the WebAssembly instruction stream has an i32.add followed by a i32.sub, then the machine code will have an addl followed by a subl.

                                  WebAssembly has a syntactically limited form of non-local control flow; it's not goto. Instead, instructions are contained in a tree of nested control blocks, and control can only exit nonlocally to a containing control block. There are three kinds of control blocks: jumping to a block or an if will continue at the end of the block, whereas jumping to a loop will continue at its beginning. In either case, as the compiler keeps a stack of nested control blocks, it has the set of valid jump targets and can use the usual assembler logic to patch forward jump addresses when the compiler gets to the block exit.
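
The bookkeeping can be sketched like so, with hypothetical structures: each control block records its kind and any forward branches that await patching.

#include <cstddef>
#include <functional>
#include <vector>

// Branches to a loop jump backward to its already-known start;
// branches to a block or if jump forward to a not-yet-known end, so
// each branch site is recorded and patched at block exit.
struct ControlFrame {
  bool isLoop;
  size_t start;                         // code offset of a loop header
  std::vector<size_t> pendingBranches;  // forward jumps awaiting the end
};

// At block exit the end offset is finally known; patch every waiting
// branch site to point at it.
void finishBlock(ControlFrame& frame, size_t endOffset,
                 const std::function<void(size_t, size_t)>& patchJump) {
  for (size_t site : frame.pendingBranches)
    patchJump(site, endOffset);
}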

                                  lean into the stack machine

                                  This is the interesting bit! So, WebAssembly instructions target a stack machine. That is to say, there's an abstract stack onto which evaluating i32.const 32 pushes a value, and if followed by i32.const 10 there would then be i32(32) | i32(10) on the stack (where new elements are added on the right). A subsequent i32.add would pop the two values off, and push on the result, leaving the stack as i32(42). There is also a fixed set of local variables, declared at the beginning of the function.

                                  The easiest thing that a compiler can do, then, when faced with a stack machine, is to emit code for a stack machine: as values are pushed on the abstract stack, emit code that pushes them on the machine stack.

But machine code is not a stack machine: real CPUs have registers. Naively pushing and popping every intermediate value generates a lot of memory traffic, most of it unnecessary. Can the baseline compiler do better?

Turns out -- yes! The baseline compiler keeps an abstract value stack as it compiles. For example, compiling i32.const 32 pushes nothing on the machine stack: it just adds a ConstI32 node to the value stack. When an instruction needs an operand that turns out to be a ConstI32, it can either encode the operand as an immediate argument or load the constant into a register.

Say we are evaluating the i32.add discussed above. After the add, where does the result go? For the baseline compiler, the answer is always "in a register", via pushing a new RegisterI32 entry on the value stack. The baseline compiler includes a stupid register allocator that spills the value stack to the machine stack if no register is available, updating value stack entries from e.g. RegisterI32 to MemI32. Note, a ConstI32 never needs to be spilled: its value can always be reloaded as an immediate.
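
In sketch form, with hypothetical types (the real entries cover more cases: 64-bit integers, floats, references, and so on):

#include <cstdint>
#include <vector>

// One entry on the abstract value stack: a value is either a known
// constant, live in a register, or spilled to the machine stack.
struct Stk {
  enum Kind { ConstI32, RegisterI32, MemI32 } kind;
  int32_t imm;    // valid when kind == ConstI32
  int reg;        // valid when kind == RegisterI32 (a register number)
  uint32_t slot;  // valid when kind == MemI32 (machine-stack offset)
};

struct ValueStack {
  std::vector<Stk> stk;

  // Compiling i32.const emits no machine code at all: just bookkeeping.
  void pushConstI32(int32_t v) { stk.push_back({Stk::ConstI32, v, 0, 0}); }

  // If the operand on top is a constant, consume it so that the emitter
  // can fold it into an immediate; otherwise leave the stack alone.
  bool popConstI32(int32_t* out) {
    if (stk.empty() || stk.back().kind != Stk::ConstI32)
      return false;
    *out = stk.back().imm;
    stk.pop_back();
    return true;
  }
};

Spilling is then a transformation over this vector: emit stores for the RegisterI32 entries, rewrite them to MemI32, and free the registers.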

                                  The end result is that the baseline compiler avoids lots of stack store and load code generation, which speeds up the compiler, and happens to make faster code as well.

                                  Note that there is one limitation, currently: control-flow joins can have multiple predecessors and can pass a value (in the current WebAssembly specification), so the allocation of that value needs to be agreed-upon by all predecessors. As in this code:

                                  (func $f (param $arg i32) (result i32)
                                    (block $b (result i32)
                                      (i32.const 0)
                                      (local.get $arg)
                                      (i32.eqz)
                                      (br_if $b) ;; return 0 from $b if $arg is zero
                                      (drop)
                                      (i32.const 1))) ;; otherwise return 1
                                  ;; result of block implicitly returned
                                  

                                  When the br_if branches to the block end, where should it put the result value? The baseline compiler effectively punts on this question and just puts it in a well-known register (e.g., $rax on x86-64). Results for block exits are the only place where WebAssembly has "phi" variables, and the baseline compiler allocates all integer phi variables to the same register. A hack, but there we are.

                                  no noodling!

                                  When I started to hack on the baseline compiler, I did a lot of code reading, and eventually came on code like this:

                                  void BaseCompiler::emitAddI32() {
                                    int32_t c;
                                    if (popConstI32(&c)) {
                                      RegI32 r = popI32();
                                      masm.add32(Imm32(c), r);
                                      pushI32(r);
                                    } else {
                                      RegI32 r, rs;
                                      pop2xI32(&r, &rs);
                                      masm.add32(rs, r);
                                      freeI32(rs);
                                      pushI32(r);
                                    }
                                  }
                                  

                                  I said to myself, this is silly, why are we only emitting the add-immediate code if the constant is on top of the stack? What if instead the constant was the deeper of the two operands, why do we then load the constant into a register? I asked on the chat channel if it would be OK if I improved codegen here and got a response I was not expecting: no noodling!

                                  The reason is, performance of baseline-compiled code essentially doesn't matter. Obviously let's not pessimize things but the reason there's a baseline compiler is to emit code quickly. If we start to add more code to the baseline compiler, the compiler itself will slow down.

                                  For that reason, changes are only accepted to the baseline compiler if they are necessary for some reason, or if they improve latency as measured using some real-world benchmark (time-to-first-frame on Doom 3, for example).

                                  This to me was a real eye-opener: a compiler optimized not for the quality of the code that it generates, but rather for how fast it can produce the code. I had seen this in action before but this example really brought it home to me.

                                  The focus on compiler throughput rather than compiled-code throughput makes it pretty gnarly to hack on the baseline compiler -- care has to be taken when adding new features not to significantly regress the old. It is much more like hacking on a production JavaScript parser than your traditional SSA-based compiler.

                                  that's a wrap!

                                  So that's the WebAssembly baseline compiler in SpiderMonkey / Firefox. Until the next time, happy hacking!

                                  state of the gnunion 2024

                                  9 February 2024 7:44 PM (gnu | free software | fsf)

                                  Greetings, GNU hackers! This blog post rounds up GNU happenings over 2024. My goal is to celebrate the software we produced over the last year and to help us plan a successful 2024.

                                  Over the past few months I have been discussing project health with a group of GNU maintainers and we were wondering how the project was doing. We had impressions, but little in the way of data. To that end I wrote some scripts to collect dates and versions for all releases made by GNU projects, as far back as data is available.

                                  In 2024, I count 243 releases, from 98 projects. Nice! Notably, on ftp.gnu.org we have the first stable releases from three projects:

GNU Guix
                                  GNU Guix is perhaps the most exciting project in GNU these days. It's a package manager! It's a distribution! It's a container construction tool! It's a package-manager-cum-distribution-cum-container-construction-tool! Hearty congratulations to Guix on their first stable release.
                                  GNU Shepherd
                                  The GNU Daemon Shepherd is a modern dependency-based init service, written in Guile Scheme, and used in Guix. When you install Guix as an operating system, it actually stages Scheme programs from the operating system definition into the Shepherd configuration. So cool!
                                  GNU Backgammon
Version 1.06.002 is not GNU Backgammon's first stable release, but it is the earliest version which is available on ftp.gnu.org. Formerly hosted on a now-defunct site of its own, GNU Backgammon is a venerable foe, and has been using neural networks since before they were cool. Welcome back, GNU Backgammon!

                                  The total release counts above are slightly above what Mike Gerwitz's scripts count in his "GNU Spotlight", posted on the FSF blog. This could be because in addition to files released on ftp.gnu.org, I also manually collected release dates for most packages that upload their software somewhere other than gnu.org. I don't count alpha.gnu.org releases, and there were a handful of packages for which I wasn't successful at retrieving their release dates. But as a first approximation, it's a relatively complete data set.

I put my scripts in a git repository, in case anyone is interested in playing with the data. Some raw CSV files are there as well.

                                  where we at?

                                  Hair toss, check my nails, baby how you GNUing? Hard to tell!

To get us closer to an answer, I calculated the active package count per year. There can be other definitions, but my reading is that an active package is one that has had a stable release within the preceding 3 calendar years. So for 2024, for example, a GNU package is considered active if it had a stable release in 2022, 2023, or 2024. What I got was a graph that looks like this:

                                  What we see is nothing before 1991 -- surely pointing to lacunae in my data set -- then a more or less linear rise in active package count until 2002, some stuttering growth rising to a peak in 2014 at 208 active packages, and from there a steady decline down to 153 active packages in 2024.

                                  Of course, as a metric, active package count isn't precisely the same as project health; GNU ed is indeed the standard editor but it's not GCC. But we need to look for measurements that indirectly indicate project health and this is what I could come up with.

Another way to slice the data is to count, for each year, how many packages made their first release ("births") and how many made their last ("deaths"):


                                  What this graph indicates is that GNU had an uninterrupted growth phase from its beginning until 2006, with more projects being born than dying. Things are mixed until 2012 or so, and since then we see many more projects making their last release and above all, very few packages "being born".

                                  where we going?

Hard to say! The data tells us where we have been, but not where we are going; that depends on what GNU hackers do next.

                                  lessons learned from guile, the ancient & spry

                                  7 February 2024 11:38 AM (guile | gnu | fosdem | maintenance | change | minimalism)

                                  Greets, hackfolk!

                                  Like just about every year, last week I took the train up to Brussels for FOSDEM, the messy and wonderful carnival of free software and of those that make it. Mostly I go for the hallway track: to see old friends, catch up, scheme about future plans, and refill my hacker culture reserves.

                                  I usually try to see if I can get a talk or two in, and this year was no exception. First on my mind was the recent release of Guile 3. This was the culmination of a 10-year plan of work and so obviously there are some things to say! But at the same time, I wanted to reflect back a bit and look at the past with a bit of distance.

                                  So in the end, my one talk was two talks. Let's start with the first one. (I'm trying a new thing where I share my talks as blog posts. We'll see how this goes. I know the rendering can be a bit off relative to the slides, but hopefully it's good enough. If you prefer, you can just watch the video instead!)

celebrating guile 3

                                  FOSDEM 2024, Brussels

                                  Andy Wingo | wingo@igalia.com

wingolog.org | @andywingo

                                  So yeah let's celebrate! I co-maintain the Guile implementation of Scheme. It's a programming language. Guile 3, in summary, is just Guile, but faster. We added a simple just-in-time compiler as well as a bunch of ahead-of-time optimizations. The result is that it runs faster -- sometimes by a lot!

                                  In the image above you can see Guile 3's performance on a number of microbenchmarks, relative to Guile 2.2, sorted by speedup. The baseline is 1.0x as fast. You can see that besides the first couple microbenchmarks where things are a bit inconclusive (click for full-size image), everything gets faster. Most are at least 2x as fast, and one benchmark is even 32x as fast. (Note the logarithmic scale on the Y axis.)


                                  mini-benchmark: eval

(primitive-eval
 '(let fib ((n 30))
    (if (< n 2)
        n
        (+ (fib (- n 1)) (fib (- n 2))))))
                                  

Guile 1.8: primitive-eval written in C

                                  Guile 2.0+: primitive-eval in Scheme

                                  Taking a look at a more medium-sized benchmark, let's compute the 30th fibonacci number, but using the interpreter instead of compiling the procedure. In Guile 2.0 and up, the interpreter (primitive-eval) is implemented in Scheme, so it's a good test of an important small Scheme program.

                                  Before 2.0, though, primitive-eval was actually implemented in C. This had a number of disadvantages, notably that it prevented tail calls between interpreted and compiled code. When we switched to a Scheme implementation of primitive-eval, we knew we would have a performance hit, but we thought that we would gain it back eventually as the compiler got better.

                                  As you can see, it took a while before the compiler and run-time improved to the point that primitive-eval in Scheme reached the speed of its old hand-tuned C implementation, but for Guile 3, we finally got there. Note again the logarithmic scale on the Y axis.

                                  macro-benchmark: guix

guix build libreoffice ghc-pandoc guix \
  --dry-run --derivation

                                  7% faster

guix system build config.scm \
  --dry-run --derivation

                                  10% faster

                                  Finally, taking a real-world benchmark, the Guix package manager is implemented entirely in Scheme. All ten thousand packages are defined in Scheme, the building scripts are in Scheme, the initial RAM disk is in Scheme -- you get the idea. Guile performance in Guix can have an important effect on user experience. As you can see, Guile 3 lowered elapsed time for some operations by around 10 percent or so. Of course there's a lot of I/O going on in addition to computation, so Guile running twice as fast will rarely make Guix run twice as fast (Amdahl's law and all that).

                                  spry /sprī/

                                  • adjective: active; lively

                                  So, when I was thinking about words that describe Guile, the word "spry" came to mind.

                                  spry /sprī/

                                  • adjective: (especially of an old person) active; lively

                                  But actually when I went to look up the meaning of "spry", Collins Dictionary says that it especially applies to the agèd. At first I was a bit offended, but I knew in my heart that the dictionary was right.

lessons learned from guile, the ancient & spry

                                  FOSDEM 2024, Brussels

Andy Wingo | wingo@igalia.com

                                  wingolog.org | @andywingo

                                  That leads me into my second talk.

guile is ancient

2010: Rust

                                  2009: Go

2007: Clojure

                                  1995: Ruby

                                  1995: PHP

                                  1995: JavaScript

                                  1993: Guile (33 years before 3.0!)

                                  It's common for a new project to be lively, but Guile is definitely not new. People have been born, raised, and earned doctorates in programming languages in the time that Guile has been around.

                                  built from ancient parts

                                  1991: Python

                                  1990: Haskell

                                  1990: SCM

                                  1989: Bash

                                  1988: Tcl

                                  1988: SIOD

                                  Guile didn't appear out of nothing, though. It was hacked up from the pieces of another Scheme implementation called SCM, which itself was initially based on Scheme in One Defun (SIOD), back before the Berlin Wall fell.

built from even older parts

                                  1987: Perl

                                  1984: C++

1975: Scheme

                                  1972: C

                                  1958: Lisp

                                  1958: Algol

                                  1954: Fortran

                                  1958: Lisp

1990: SCM (34 years ago!)

But it goes back further! The Scheme language, of which Guile is an implementation, dates from 1975, before I was born; and you can, if you choose, trace the lines back to the lambda calculus, created in the mid-30s as a notation for computation. I suppose at this point I should say the mid-1930s, to disambiguate.

                                  The point is, Guile is old! Statistically, most software projects from olden times are now dead. How has Guile managed to survive and (sometimes) thrive? Surely there must be some lesson or other that can be learned here.


                                  Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past.

                                  The tradition of all dead generations weighs like a nightmare on the brains of the living. [...]

                                  Eighteenth Brumaire of Louis Bonaparte, Marx, 1852

I am no philosopher of history, but I know that there are some ways of looking at the past that do not help me understand things. One is the arrow of enlightened progress, in which events exist in a causal chain, each producing the next. It doesn't help me understand the atmosphere, tensions, and possibilities inherent at any particular point. I find the "progress" theory of history to be an extreme form of selection bias.

Much more helpful to me is the Hegelian notion of dialectics: that at any given point in time there are various tensions at work. In our field, an example could be memory safety versus systems programming. These tensions create an environment that favors actions that lead towards resolution of the tensions. It doesn't mean that there's only one way to resolve the tensions, and it's not an automatic process -- people still have to do things. But the tendency is to ratchet history forward to a new set of tensions.

                                  The history of a project, to me, is then a process of dialectic tensions and resolutions. If the project survives, as Guile has, then it should teach us something about the way this process works in practice.

                                  ancient & spry

                                  Languages evolve; how to remain minimal?

                                  Dialectic opposites

• stability vs change

• extend vs embed

                                  • ...

                                  Lessons learned from inside Hegel’s motor of history

                                  One dialectic is the tension between the world's problems and what tools Guile offers to understand and solve them. In 1993, the web didn't really exist. In 2033, if Guile doesn't run well in a web browser, probably it will be dead. But this process operates very slowly, for an old project; Guile isn't built on CORBA or something ephemeral like that, so we don't have very much data here.


                                  In the specific context of Guile, and for the audience of the FOSDEM minimal languages devroom, we should recognize that for a software project, age and minimalism don't necessarily go together. Software gets features over time and becomes bigger. What does it mean for a minimal language to evolve?

                                  hill-climbing is insufficient

                                  Ex: Guile 1.8; Extend vs Embed

                                  One key lesson that I have learned is that the strategy of making only incremental improvements is a recipe for death, in the long term. The natural result is that you reach what you perceive to be the most optimal state of your project. Any change can only make it worse, so you stop moving.

Guile 1.8 was at such a local maximum: it was a fine library for embedding into C programs, and every incremental improvement kept it on that hill. But the world was moving away from embedded extension languages, and getting to a better hill -- Guile as a capable standalone language in its own right -- required a disruptive jump rather than a climb.

                                  users stay unless pushed away

Inertia: existing interfaces

                                  • Source (API)

                                  • Binary (ABI)


                                  • CLI

                                  • ...

                                  Ex: Python 3; local-eval; R6RS syntax; set!, set-car!


                                  Inertia is good and bad. It does conflict with minimalism as a principle; if you were to design Scheme in 2024, you would not include mutable variables or even mutable pairs. But they are still with us because if we removed them, we'd break too many users.

                                  Users can even make you add back things that you had removed. In Guile 2.0, we removed the capability to evaluate an expression at run-time within the lexical environment of an expression, as we didn't know how to implement this outside an interpreter. It turns out this was so important to users that we had to add local-eval back to Guile, later in the 2.0 series. (Fortunately we were able to do it in a way that layered on lower-level facilities; this approach reconciled me to the solution.)

change loses users

                                  What users say: don’t change or remove existing behavior


                                  Ex: psyntax; BDW-GC mark & finalize; compile-time; Unicode / locales

                                  Unfortunately, the need to change means that sometimes you will lose users. It's either a dead project, or losing users.

                                  In Guile 1.8, for example, the macro expander ran lazily: it would only expand code the first time it ran it. This was good for start-up time, because not all code is evaluated in the course of a simple script. Lazy expansion allowed us to start doing important work sooner. However, this approach caused immense pain to people that wanted "proper" Scheme macros that preserved lexical scoping; the state of the art was to eagerly expand an entire file. So we switched, and at the same time added a notion of compile-time. This compromise kept good start-up time while allowing fancy macros.


                                  every interface is a cost

                                  Guile binary ABI: libguile.so; compiled Scheme files

                                  Make compatibility easier: minimize interface

                                  Ex: scm_sym_unquote, GOOPS, Go, Guix

Every interface you expose is a commitment, and thus a cost: it constrains what you can change later. The fewer and smaller the interfaces, the easier compatibility is to maintain; hence the advice to minimize interface.

You always have some interfaces, though. For example, Guix can't change its command-line interface from one day to the next; users would complain. But it's been surprising to me the extent to which Guile has interfaces that I didn't consider. Recently for example in the 3.0 release, we unexported some symbols by mistake. Users complained, so we're putting them back in now.

                                  parallel installs for the win

                                  Highly effective pattern for change

                                  • libguile-2.0.so

                                  • libguile-3.0.so

                                  http://ometer.com/parallel.html

                                  Changed ABI is new ABI; it should have a new name

                                  Ex: make-struct/no-tail, GUILE_PKG([2.2]), libtool

                                  So how does one do incompatible change? If "don't" isn't a sufficient answer, then parallel installs is a good strategy. For example in Guile, users don't have to upgrade to 3.0 until they are ready. Guile 2.2 happily installs in parallel with Guile 3.0.

As another small example, there's a function in Guile called make-struct (old doc link), whose first argument is the number of "tail" slots, followed by initializers for all slots (normal and "tail"). This tail feature is weird and I would like to remove it. Unfortunately I can't just remove the argument, so I had to make a new function, make-struct/no-tail, which exists in parallel with the old version that I can't break.

                                  deprecation facilitates migration

                                  scm_c_issue_deprecation_warning
                                    ("Arbiters are deprecated.  "
                                     "Use mutexes or atomic variables instead.");

                                  begin-deprecated, SCM_ENABLE_DEPRECATED

                                  Fortunately there is a way to encourage users to migrate from old interfaces to new ones: deprecation. In Guile this applies to all of our interfaces (binary, source, etc). If a feature is marked as deprecated, we cause its use to issue a warning, ideally at compile-time when users responsible for the package can fix it. You can even add __attribute__((__deprecated__)) on C types!
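
For instance, at the C level, something like the following works with GCC and Clang. The names here are invented for illustration; Guile's own deprecated bindings go through scm_c_issue_deprecation_warning and friends, as above.

/* Hypothetical example: deprecating an old type and an old function.
   Code that uses them still compiles, but gets a compile-time warning. */
typedef int my_old_handle_t __attribute__((__deprecated__));

__attribute__((__deprecated__("use my_new_function instead")))
void my_old_function (void);

void my_old_function (void)
{
  /* old behavior, kept working during the deprecation period */
}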


                                  Replace, Deprecate, Remove

                                  All change is possible; question is only length of deprecation period

                                  Applies to all interfaces

                                  Guile deprecation period generally one stable series

                                  Ex: scm_t_uint8; make-struct; Foreign objects; uniform vectors

                                  Finally, you end up in a situation where you have replaced the old interface and issued deprecation warnings to help users migrate. The next step is to remove the old interface. If you don't do this, you are failing as a project maintainer -- your project becomes literally unmaintainable as it just grows and grows.

This strategy applies to all changes. The deprecation period may last a while, and it may be that the replacement you built doesn't serve the purpose. There is still a dialog with the users that needs to happen. As an example, I made a replacement for the "SMOB" facility in Guile that allows users to define new types, backed by C interfaces. This new foreign objects facility might not actually be good enough to replace SMOBs; since I haven't formally deprecated SMOBs, I don't know yet, because users are still using the old thing!

                                  change produces a new stable point

                                  Stability within series: only additions

                                  Corollary: dependencies must be at least as stable as you!

                                  • for your definition of stable

                                  • social norms help (GNU, semver)

                                  Ex: libtool; unistring; gnulib

In my experience, the old management dictum that "the only constant is change" does not describe software. Guile changes, then it becomes stable for a while. You need an unstable series to escape hill-climbing; then, once you've found your new hill, you start climbing again in the stable series.

                                  Once you reach your stable point, the projects you rely on need to exhibit the same degree of stability that you envision for your project. You can't build a web site that you expect to maintain for 10 years on technology that fundamentally changes every 6 months. But stable dependencies isn't something you can ensure technically; rather it relies on social norms of who makes the software you use.

                                  who can crank the motor of history?

                                  All libraries define languages

                                  Allow user to evolve the language


                                  • User syntax: macros (yay Scheme)

                                  Guile 1.8 perf created tension

• pushed users to write extensions in C

                                  • large C interface “for speed”


                                  A dialectic process does not progress on its own: it requires actions. As a project maintainer, some of my actions are because I want to do them. Others are because users want me to do them. The user-driven actions are generally a burden and as a lazy maintainer, I want to minimize them.

                                  Here I think Guile has to a large degree escaped some of the pressures that weigh on other languages, for example Python. Because Scheme allows users to define language features that exist on par with "built-in" features, users don't need my approval or intervention to add (say) new syntax to the language they work in. Furthermore, their work can still compose with the work of others, even if the others don't buy in to their language extensions.

                                  Still, Guile 1.8 did have a dynamic whereby the relatively poor performance of having to run all code through primitive-eval meant that users were pushed towards writing extensions in C. This in turn pushed Guile to expose all of its guts for access from C, which obviously has led to an overbloated C API and ABI. Happily the work on the Scheme compiler has mostly relieved this pressure, and we may therefore be able to trim the size of the C API and ABI over time.

                                  contributions and risk

From a maintenance point of view, all interface is legacy

                                  Guile: Sometimes OK to accept user modules when they are more stable than Guile

                                  In-tree users keep you honest

                                  Ex: SSAX, fibers, SRFI

                                  It can be a good strategy to "sediment" solutions to common use cases into Guile itself. This can improve the minimalism of an entire ecosystem of code. The maintenance burden has to be minimal, however; Guile has sometimes adopted experimental code into its repository, and without active maintenance, it soon becomes stale relative to what users and the module maintainers expect.

                                  I would note an interesting effect: pieces of code that were adopted into Guile become a snapshot of the coding style at that time. It's useful to have some in-tree users because it gives you a better idea about how a project is seen from the outside, from a code perspective.

                                  sticky bits

                                  Memory management is an ongoing thorn

                                  Local maximum: Boehm-Demers-Weiser conservative collector

                                  How to get to precise, generational GC?

                                  Not just Guile; e.g. CPython __del__

                                  There are some points that resist change. The stickiest of these is the representation of heap-allocated Scheme objects in C. Guile currently uses a garbage collector that "automatically" finds all live Scheme values on the C stack and in registers. It was the right choice at the time, given our maintenance budget. But to get the next bump in performance, we need to switch to a generational garbage collector. It's hard to do that without a lot of pain to C users, essentially because the C language is too weak to express the patterns that we would need. I don't know how to proceed.

I would note, though, that memory management is a kind of cross-cutting interface, and that it's not just Guile that's having problems changing; I understand PyPy has had a lot of problems regarding when Python destructors get called, due to its switch from reference counting to a proper GC.

where to?

                                  We are here: stability

                                  And then?

                                  • Parallel-installability for source languages: #lang

                                  • Sediment idioms from Racket to evolve Guile user base

                                  Remove myself from “holding the crank”

                                  So where are we going? Nowhere, for the moment; or rather, up the hill. We just released Guile 3.0, so let's just appreciate that for the time being.

                                  But as far as next steps in language evolution, I think in the short term they are essentially to further enable change while further sedimenting good practices into Guile. On the change side, we need parallel installability for entire languages. Racket did a great job facilitating this with #lang and we should just adopt that.

As for sedimentation, we should step back and consider whether any common use patterns built by our users should be included in core Guile, and widen our gaze to Racket also. It will take some effort, both technically and in building social/emotional consensus about how much change is good and how bold versus conservative to be: putting the dialog into dialectic.

                                  dialectic, boogie woogie woogie

                                  http://gnu.org/s/guile

                                  http://wingolog.org/

                                  #guile on freenode

                                  @andywingo

                                  wingo@igalia.com

                                  Happy hacking!

                                  Hey that was the talk! Hope you enjoyed the writeup. Again, video and slides available on the FOSDEM web site. Happy hacking!

on rms and gnu

                                  8 October 2024 3:34 PM (rms | gnu)

                                  Yesterday, a collective of GNU maintainers publicly posted a statement advocating collective decision-making in the GNU project. I would like to expand on what that statement means to me and why I signed on.

                                  For many years now, I have not considered Richard Stallman (RMS) to be the head of the GNU project. Yes, he created GNU, speaking it into existence via prophetic narrative and via code; yes, he inspired many people, myself included, to make the vision of a GNU system into a reality; and yes, he should be recognized for these things. But accomplishing difficult and important tasks for GNU in the past does not grant RMS perpetual sovereignty over GNU in the future.

                                  ontological considerations

                                  More on the motivations for the non serviam in a minute. But first, a meta-point: the GNU project does not exist, at least not in the sense that many people think it does. It is not a legal entity. It is not a charity. You cannot give money to the GNU project. Besides the manifesto, GNU has no by-laws or constitution or founding document.

                                  One could describe GNU as a set of software packages that have been designated by RMS as forming part, in some way, of GNU. But this artifact-centered description does not capture movement: software does not, by itself, change the world; it lacks agency. It is the people that maintain, grow, adapt, and build the software that are the heart of the GNU project -- the maintainers of and contributors to the GNU packages. They are the GNU of whom I speak and of whom I form a part.

                                  wasted youth

                                  Richard Stallman describes himself as the leader of the GNU project -- the "chief GNUisance", he calls it -- but this position only exists in any real sense by consent of the people that make GNU. So what is he doing with this role? Does he deserve it? Should we consent?

To me it has been clear for many years that to a first approximation, the answer is that RMS does nothing for GNU. RMS does not write software. He does not design software, or systems. He does hold a role of accepting new projects into GNU; there, his primary criterion is not "does this make a better GNU system"; it is, rather, "does the new project meet the minimum requirements".

By itself, this seems to me to be a failure of leadership for a software project like GNU. But unfortunately, when RMS's role in GNU isn't neglect, more often than not it's negative. RMS's interventions are generally conservative -- to assert authority over the workings of the GNU project, to preserve ways of operating that he sees as important. See the whole glibc abortion joke debacle for an example of how RMS acts, when he chooses to do so.

                                  Which, fair enough, right? I can hear you saying it. RMS started GNU so RMS decides what it is and what it can be. But I don't accept that. GNU is about practical software freedom, not about RMS. GNU has long outgrown any individual contributor. I don't think RMS has the legitimacy to tell this group of largely volunteers what we should build or how we should organize ourselves. Or rather, he can say what he thinks, but he has no dominion over GNU; he does not have majority sweat equity in the project. If RMS actually wants the project to outlive him -- something that by his actions is not clear -- the best thing that he could do for GNU is to stop pretending to run things, to instead declare victory and retire to an emeritus role.

                                  Note, however, that my personal perspective here is not a consensus position of the GNU project. There are many (most?) GNU developers that still consider RMS to be GNU's rightful leader. I think they are mistaken, but I do not repudiate them for this reason; we can work together while differing on this and other matters. I simply state that I, personally, do not serve RMS.

                                  selective attrition

                                  Though the "voluntary servitude" questions are at the heart of the recent joint statement, I think we all recognize that attempts at self-organization in GNU face a grave difficulty, even if RMS decided to retire tomorrow, in the way that GNU maintainers have selected themselves.

The great tragedy of RMS's tenure in the supposedly universalist FSF and GNU projects is that he behaves in a way that is particularly alienating to women. It doesn't take a genius to conclude that if you're personally driving away potential collaborators, that's a bad thing for the organization, and actively harmful to the organization's goals: software freedom is a cause that is explicitly for everyone.

                                  We already know that software development in people's free time skews towards privilege: not everyone has the ability to devote many hours per week to what is for many people a hobby, and it follows of course that those that have more privilege in society will be more able to establish a position in the movement. And then on top of these limitations on contributors coming in, we additionally have this negative effect of a toxic culture pushing people out.

                                  The result, sadly, is that a significant proportion of those that have stuck with GNU don't see any problems with RMS. The cause of software freedom has always run against the grain of capitalism so GNU people are used to being a bit contrarian, but it has also had the unfortunate effect of creating a cult of personality and a with-us-or-against-us mentality. For some, only a traitor would criticise the GNU project. It's laughable but it's a thing; I prefer to ignore these perspectives.

                                  Finally, it must be said that there are a few GNU people for whom it's important to check if the microphone is on before making a joke about rape culture. (Incidentally, RMS had nothing to say on that issue; how useless.)

So I honestly am not sure if GNU as a whole effectively has the capacity to make good decisions. Neglect and selective attrition have gravely weakened the project. But I stand by the principles and practice of software freedom, and by my fellow GNU maintainers who are unwilling to accept the status quo, and I consider attempts to reduce GNU to founder-loyalty to be mistaken and without legitimacy.

                                  where we're at


                                  In the meantime, as always, happy hacking, and: no gods! No masters! No chief!!!

                                  fibs, lies, and benchmarks

                                  26 June 2024 10:34 AM (guile | jit | v8 | webassembly | ocaml | optimization | scheme | chez | racket)

                                  Friends, consider the recursive Fibonacci function, expressed most lovelily in Haskell:

                                  fib 0 = 0
                                  fib 1 = 1
                                  fib n = fib (n-1) + fib (n-2)
                                  

Computing elements of the Fibonacci sequence ("Fibonacci numbers") is a common microbenchmark. Microbenchmarks are like Suzuki exercises for learning violin: not written to be good tunes (good programs), but rather to help you improve a skill.

                                  The fib microbenchmark teaches language implementors to improve recursive function call performance.

                                  I'm writing this article because after adding native code generation to Guile, I wanted to check how Guile was doing relative to other language implementations. The results are mixed. We can start with the most favorable of the comparisons: Guile present versus Guile of the past.


                                  I collected these numbers on my i7-7500U CPU @ 2.70GHz 2-core laptop, with no particular performance tuning, running each benchmark 10 times, waiting 2 seconds between measurements. The bar value indicates the median elapsed time, and above each bar is an overlayed histogram of all results for that scenario. Note that the y axis is on a log scale. The 2.9.3* version corresponds to unreleased Guile from git.

                                  Good news: Guile has been getting significantly faster over time! Over decades, true, but I'm pleased.

                                  where are we? static edition


                                  First up would be the industrial C compilers, GCC and LLVM. We can throw in a few more "static" language implementations as well: compilers that completely translate to machine code ahead-of-time, with no type feedback, and a minimal run-time.


                                  Here we see that GCC is doing best on this benchmark, completing in an impressive 0.304 seconds. It's interesting that the result differs so much from clang. I had a look at the disassembly for GCC and I see:

                                  fib:
                                      push   %r12
                                      mov    %rdi,%rax
                                      push   %rbp
                                      mov    %rdi,%rbp
                                      push   %rbx
                                      cmp    $0x1,%rdi
                                      jle    finish
                                      mov    %rdi,%rbx
                                      xor    %r12d,%r12d
                                  again:
                                      lea    -0x1(%rbx),%rdi
                                      sub    $0x2,%rbx
                                      callq  fib
                                      add    %rax,%r12
                                      cmp    $0x1,%rbx
                                      jg     again
                                      and    $0x1,%ebp
                                      lea    0x0(%rbp,%r12,1),%rax
                                  finish:
                                      pop    %rbx
                                      pop    %rbp
                                      pop    %r12
                                      retq   
                                  

It's not quite straightforward; what's the loop there for? It turns out that GCC inlines one of the recursive calls to fib. The microbenchmark is no longer measuring call performance, because GCC managed to reduce the number of calls. If I had to guess, I would say this optimization doesn't have wide applicability and is just there to game benchmarks. In that case, well played, GCC, well played.

                                  LLVM's compiler (clang) looks more like what we'd expect:

                                  fib:
                                     push   %r14
                                     push   %rbx
                                     push   %rax
                                     mov    %rdi,%rbx
                                     cmp    $0x2,%rdi
                                     jge    recurse
                                     mov    %rbx,%rax
                                     add    $0x8,%rsp
                                     pop    %rbx
                                     pop    %r14
                                     retq   
                                  recurse:
                                     lea    -0x1(%rbx),%rdi
                                     callq  fib
                                     mov    %rax,%r14
                                     add    $0xfffffffffffffffe,%rbx
                                     mov    %rbx,%rdi
                                     callq  fib
                                     add    %r14,%rax
                                     add    $0x8,%rsp
                                     pop    %rbx
                                     pop    %r14
                                     retq   
                                  

                                  I bolded the two recursive calls.

                                  Incidentally, the fib as implemented by GCC and LLVM isn't quite the same program as Guile's version. If the result gets too big, GCC and LLVM will overflow, whereas in Guile we overflow into a bignum. Also in C, it's possible to "smash the stack" if you recurse too much; compilers and run-times attempt to mitigate this danger but it's not completely gone. In Guile you can recurse however much you want. Finally in Guile you can interrupt the process if you like; the compiled code is instrumented with safe-points that can be used to run profiling hooks, debugging, and so on. Needless to say, this is not part of C's mission.

                                  Some of these additional features can be implemented with no significant performance cost (e.g., via guard pages). But it's fair to expect that they have some amount of overhead. More on that later.

The other compilers are OCaml's ocamlopt, coming in with a very respectable result; Go, also doing well; and V8 WebAssembly via Node. As you know, you can compile C to WebAssembly, and then V8 will compile that to machine code. In practice it's just as static as any other compiler, but the generated assembly is a bit more involved than clang's.


                                  Apparently fib compiles to a function of two arguments, the first passed in rsi, and the second in rax. (V8 uses a custom calling convention for its compiled WebAssembly.) The first synthesized argument is a handle onto run-time data structures for the current thread or isolate, and in the function prelude there's a check to see that the function has enough stack. V8 uses these stack checks also to handle interrupts, for when a web page is stuck in JavaScript.

                                  Otherwise, it's a more or less normal function, with a bit more register/stack traffic than would be strictly needed, but pretty good.

                                  do optimizations matter?

You've heard of Moore's Law -- though it doesn't apply any more, it roughly translated into hardware doubling in speed every 18 months. (Yes, I know it wasn't precisely that.) There is a corresponding rule of thumb for compiler land, Proebsting's Law: compiler optimizations make software twice as fast every 18 years. Zow!

                                  The previous results with GCC and LLVM were with optimizations enabled (-O3). One way to measure Proebsting's Law would be to compare the results with -O0. Obviously in this case the program is small and we aren't expecting much work out of the optimizer, but it's interesting to see anyway:


Answer: optimizations don't matter much for this benchmark. This investigation does give a good baseline for compilers from high-level languages, like Guile: in the absence of clever trickery like the recursive inlining thing GCC does and in the absence of industrial-strength instruction selection, what's a good baseline target for a compiler? Here we see for this benchmark that it's somewhere between 420 and 620 milliseconds or so. Go gets there, and OCaml does even better.

                                  how is time being spent, anyway?

                                  Might we expect V8/WebAssembly to get there soon enough, or is the stack check that costly? How much time does one stack check take anyway? For that we'd have to determine the number of recursive calls for a given invocation.

Friends, it's not entirely clear to me why this is, but I instrumented a copy of fib, and I found that the number of calls in fib(n) was a more or less constant factor of the result of calling fib. That ratio converges to twice the golden ratio, which means that since fib(n+1) ≈ φ · fib(n), the number of calls in fib(n) is approximately 2φ · fib(n). I scratched my head for a bit as to why this is and I gave up; the Lord works in mysterious ways.
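
If you want to check the claim, a few lines suffice to count calls and watch the ratio converge. This is my own sketch, not the instrumented copy used for the measurements above.

#include <cmath>
#include <cstdint>
#include <cstdio>

static uint64_t calls;  // bumped on every entry to fib

static uint64_t fib(uint64_t n) {
  calls++;
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

int main() {
  for (int n = 10; n <= 30; n += 5) {
    calls = 0;
    uint64_t result = fib(n);
    std::printf("fib(%d) = %llu; calls/result = %.4f\n", n,
                (unsigned long long)result,
                (double)calls / (double)result);
  }
  // The ratio tends to twice the golden ratio: 2*phi = 1 + sqrt(5) ~= 3.2361.
  std::printf("2*phi = %.4f\n", 1.0 + std::sqrt(5.0));
  return 0;
}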

Anyway for fib(40), that means that there are around 3.31e8 calls, absent GCC shenanigans. So that would indicate that each call for clang takes around 1.27 ns, which at turbo-boost speeds on this machine is 4.44 cycles. At maximum throughput (4 IPC), that would indicate 17.8 instructions per call, and indeed on the recursive path I count 17 instructions.
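
Spelling out that arithmetic, taking the ~420 ms clang result from above and assuming this machine's roughly 3.5 GHz turbo clock:

0.420 s ÷ 3.31e8 calls ≈ 1.27 ns per call
1.27 ns × 3.5 GHz ≈ 4.44 cycles per call
4.44 cycles × 4 instructions/cycle ≈ 17.8 instructions per call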

For WebAssembly I calculate 2.25 nanoseconds per call, or 7.9 cycles, or 31.5 (fused) instructions at max IPC. And indeed counting the extra jumps in the trampoline, I get 33 instructions on the recursive path. I count 4 instructions for the stack check itself, one to save the current isolate, and two to shuffle the current isolate into place for the recursive calls. But, compared to clang, V8 puts 6 words on the stack per call, as opposed to only 4 for LLVM. I think with better interprocedural register allocation for the isolate (i.e.: reserve a register for it), V8 could get a nice boost for call-heavy workloads.

                                  where are we? dynamic edition

Guile doesn't aim to replace C; it's different. It has garbage collection, an integrated debugger, and a compiler that's available at run-time, and it is dynamically typed. It's perhaps more fair to compare it to languages that share some of these characteristics, so I ran these tests on versions of recursive fib written in a number of languages. Note that all of the numbers in this post include start-up time.


                                  Here, the ocamlc line is the same as before, but using the bytecode compiler instead of the native compiler. It's a bit of an odd thing to include but it performs so well I just had to include it.

                                  I think the real takeaway here is that Chez Scheme has fantastic performance. I have not been able to see the disassembly -- does it do the trick like GCC does? -- but the numbers are great, and I can see why Racket decided to rebase its implementation on top of it.

                                  Interestingly, as far as I understand, Chez implements stack checks in the straightforward way (an inline test-and-branch), not with a guard page, and instead of using the stack check as a generic ability to interrupt a computation in a timely manner as V8 does, Chez emits a separate interrupt check. I would like to be able to see Chez's disassembly but haven't gotten around to figuring out how yet.

                                  Since I originally published this article, I added a LuaJIT entry as well. As you can see, LuaJIT performs as well as Chez in this benchmark.

                                  Haskell's call performance is surprisingly bad here, beaten even by OCaml's bytecode compiler; is this the cost of laziness, or just a lacuna of the implementation? I do not know. I do know I have this mental image that Haskell has a good compiler, but apparently if that's the standard, so does Guile :)

                                  Finally, in this comparison section, I was not surprised by cpython's relatively poor performance; we know cpython is not fast. I think though that it just goes to show how little these microbenchmarks are worth when it comes to user experience; like many of you I use plenty of Python programs in my daily work and don't find them slow at all. Think of micro-benchmarks like x-ray diffraction; they can reveal the hidden substructure of DNA but they say nothing at all about the organism.

                                  where to now?

                                  While writing this article, I realized that the fib I was benchmarking was not the same as the one that some other Scheme benchmark suites run. I was running this:

                                  (define (fib n)
                                    (if (< n 2)
                                        n
                                        (+ (fib (- n 1)) (fib (- n 2)))))
                                  

                                  They were running this, instead:

                                  (define (fib n)
                                    (define (fib* n)
                                      (if (< n 2)
                                          n
                                          (+ (fib* (- n 1)) (fib* (- n 2)))))
                                    (fib* n))
                                  

                                  The thing is, historically, Scheme programs have treated top-level definitions as being mutable. This is because you don't know the extent of the top-level scope -- there could always be someone else who comes and adds a new definition of fib, effectively mutating the existing definition in place.

                                  This practice has its uses. It's useful to be able to go in to a long-running system and change a definition to fix a bug or add a feature. It's also a useful way of developing programs, to incrementally build the program bit by bit.
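                                  To make the dynamics concrete, here is a sketch of what mutable top-levels mean at a Guile REPL; fib-original is a name I introduce for illustration:

                                  (define (fib n)
                                    (if (< n 2)
                                        n
                                        (+ (fib (- n 1)) (fib (- n 2)))))

                                  (define fib-original fib)   ; keep a handle on the old closure
                                  (set! fib (lambda (n) 1))   ; mutate the top-level binding

                                  (fib-original 10)           ; => 2, not 55!  The recursive calls
                                                              ; inside fib-original go through the
                                                              ; top-level variable, so they now see
                                                              ; the new fib.

                                  With the inner fib* variant, by contrast, the recursion goes through a lexical binding that nobody else can set!, so the compiler is free to call it directly.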


                                  But, I would say that as someone who has written and maintained a lot of Scheme code, it's not a normal occurrence to mutate a top-level binding on purpose, and it has a significant performance impact. If the compiler knows the target of a call, that unlocks a number of important optimizations: type check elision on the callee, more optimal closure representation, smaller stack frames, possible contification (turning calls into jumps), argument and return value count elision, representation specialization, and so on.

                                  This overhead is especially egregious for calls inside modules. Scheme-the-language only gained modules relatively recently -- relative to the history of Scheme -- and one of the aspects of modules is precisely to allow reasoning about top-level module-level bindings. This is why running Chez Scheme with the --program option is generally faster than --script (which I used for all of these tests): it opts in to the "newer" specification of what a top-level binding is.
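                                  Concretely, the two invocation modes look like this; a sketch, assuming the Chez binary is installed as scheme (on some distributions it is chezscheme):

                                  ;; interactive top-level semantics: fib stays mutable
                                  ;;   $ scheme --script fib.ss
                                  ;;
                                  ;; R6RS top-level program semantics: fib's binding is known
                                  ;;   $ scheme --program fib.ss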

                                  In Guile we would probably like to move towards a more static way of treating top-level bindings, at least those within a single compilation unit. But we haven't done so yet. It's probably the most important single optimization we can make over the near term, though.

                                  As an aside, it seems that LuaJIT also shows a similar performance differential for local function fib(n) versus just plain function fib(n).

                                  It's true though that even absent lexical optimizations, top-level calls can be made more efficient in Guile. I am not sure if we can reach Chez with the current setup of having a template JIT, because we need two return addresses: one virtual (for bytecode) and one "native" (for JIT code). Register allocation is also something to improve but it turns out to not be so important for fib, as there are few live values and they need to spill for the recursive call. But, we can avoid some of the indirection on the call, probably using an inline cache associated with the callee; Chez has had this optimization since 1984!
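                                  To see where the indirection comes from, here is a sketch using Guile's reflective API; this is the conceptual path of a top-level call, not the literal code the compiler emits:

                                  ;; A top-level binding in Guile is a mutable box, a "variable".
                                  (define fib-var (module-variable (current-module) 'fib))

                                  ;; Conceptually, a call like (fib 10) must:
                                  ((variable-ref fib-var) 10)   ; 1. load the box's current value,
                                                                ; 2. check that it is a procedure,
                                                                ; 3. make an indirect call.  => 55

                                  An inline cache would remember the last procedure seen at a given call site and jump to it directly, falling back (and refilling the cache) if the variable is ever mutated.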

                                  what guile learned from fib


                                  To decide what improvements to make, I extracted the assembly that Guile generated for fib to a standalone file, and tweaked it in a number of ways to determine what the potential impact of different scenarios was. Some of the detritus from this investigation is here.


                                  One thing that became clear from this investigation was that our stack frames were too large; there was too much memory traffic. I was able to improve this in the lexical-call case by adding an optimization to elide useless closure bindings. Usually in Guile when you call a procedure, you pass the callee as the 0th parameter, then the arguments. This is so the procedure has access to its closure. For some "well-known" procedures -- procedures whose callers can be enumerated -- we optimize to pass a specialized representation of the closure instead ("closure optimization"). But for well-known procedures with no free variables, there's no closure, so we were just passing a throwaway value (#f). An unhappy combination of Guile's current stack-based calling convention and a strange outcome from the slot allocator meant that frames were a couple of words too big. Changing to allow a custom calling convention in this case sped up fib considerably.
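                                  As an illustration of the "well-known, no free variables" case, consider a sketch like this (the names are mine):

                                  (define (count-down n)
                                    ;; All of lp's call sites are visible here, so it is
                                    ;; "well-known"; and it closes over no free variables, so
                                    ;; there is no closure for its callers to pass.
                                    (let lp ((n n))
                                      (unless (zero? n)
                                        (lp (- n 1)))))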

                                  Finally, and also significantly, Guile's JIT code generation used to handle calls and returns with manual stack management and indirect jumps, instead of using the platform calling convention and the C stack. This is to allow unlimited stack growth. However, it turns out that the indirect jumps at return sites were stalling the pipeline. Instead we switched to use call/return instructions while keeping our manual stack management; this allows the CPU to use its return-address stack to predict return targets, speeding up code.

                                  et voilà

                                  Well, long article! Thanks for reading. There's more to do but I need to hit the publish button and pop this off my stack. Until next time, happy hacking!

                                  pictie, my c++-to-webassembly workbench

                                  3 June 2024 10:10 AM (igalia | c++ | emscripten | finalizers | weak-refs | javascript | webidl | sicp)

                                  Hello, interwebs! Today I'd like to share a little skunkworks project with y'all: Pictie, a workbench for WebAssembly C++ integration on the web.

                                  loading pictie...

                                  wtf just happened????!?

                                  So! If everything went well, above you have some colors and a prompt that accepts JavaScript expressions to evaluate. If the result of evaluating a JS expression is a painter, we paint it onto a canvas.

                                  But allow me to back up a bit. These days everyone is talking about WebAssembly, and I think with good reason: just as many of the world's programs run on JavaScript today, tomorrow many of them will also be written in languages compiled to WebAssembly. JavaScript isn't going anywhere, of course; it's around for the long term. It's the "also" aspect of WebAssembly that's interesting: it appears to be a computing substrate that is compatible with JS and which can extend the range of the kinds of programs that can be written for the web.

                                  And yet, it's early days. What are programs of the future going to look like? What elements of the web platform will be needed when we have systems composed of WebAssembly components combined with JavaScript components, combined with the browser? Is it all going to work? Are there missing pieces? What's the status of the toolchain? What's the developer experience? What's the user experience?

                                  When you look at the current set of applications targeting WebAssembly in the browser, mostly it's games. While compelling, games don't provide a whole lot of insight into the shape of the future web platform, inasmuch as there doesn't have to be much JavaScript interaction when you have an already-working C++ game compiled to WebAssembly. (Indeed, people are actively working on removing the incidental interactions with JS that are currently necessary -- bouncing through JS in order to call WebGL -- so that WebAssembly can call platform facilities (WebGL, etc.) directly. But I digress!)

                                  For WebAssembly to really succeed in the browser, there should also be incremental stories -- what does it look like when you start to add WebAssembly modules to a system that is currently written mostly in JavaScript?

                                  Pictie is an attempt to explore these questions from the perspective of a small C++ library embedded in a mostly-JavaScript page.

                                  pictie is a test bed

                                  Pictie is a simple, standalone C++ graphics package implementing an algebra of painters. It was created not to be a great graphics package but rather to be a test-bed for compiling C++ libraries to WebAssembly. You can read more about it on its github page.

                                  Structurally, pictie is a modern C++ library with a functional-style interface, smart pointers, reference types, lambdas, and all the rest. We use emscripten to compile it to WebAssembly; you can see more information on how that's done in the repository, or check the README.

                                  Pictie is inspired by Peter Henderson's "Functional Geometry" (1982, 2002). "Functional Geometry" inspired the Picture language from the well-known Structure and Interpretation of Computer Programs computer science textbook.

                                  prototype in action

                                  So far it's been surprising how much stuff just works. There's still lots to do, but just getting a C++ library on the web is pretty easy! I advise you to take a look to see the details.

                                  If you are thinking of dipping your toe into the WebAssembly water, maybe take a look also at Pictie when you're doing your back-of-the-envelope calculations. You can use it or a prototype like it to determine the effects of different compilation options on compile time, load time, throughput, and network traffic. You can check if the different binding strategies are appropriate for your C++ idioms; Pictie currently uses embind (source), but I would like to compare to WebIDL as well. You might also use it if you're considering what shape your C++ library should have to have a minimal overhead in a WebAssembly context.

                                  I use Pictie as a test-bed when working on the web platform: for example on the weak refs proposal, which adds finalization and leak detection, and on the binding layers around Emscripten. Eventually I'll be able to use it in other contexts as well, with upcoming proposals like typed objects and GC.


                                  As the browser and adjacent environments have come to dominate programming in practice, we lost a bit of the delightful variety from computing. JS is a great language, but it shouldn't be the only medium for programs. WebAssembly is part of this future world, waiting in potentia, where applications for the web can be written in any of a number of languages. But, this future world will only arrive if it "works" -- if all of the various pieces, from standards to browsers to toolchains to virtual machines, only if all of these pieces fit together in some kind of sensible way. Now is the early phase of annealing, when the platform as a whole is actively searching for its new low-entropy state. We're going to need a lot of prototypes to get from here to there. In that spirit, may your prototypes be numerous and soon replaced. Happy annealing!
