Insiders Information Review Legit System Or Hype

Insiders Information Review 2015 EDITION - Is Insiders Information SCAM? So How Does Insiders Information Software Work?? Insider Information System By Richard Vinner And Michael Williams

Insiders Information Review 2015 - HEY! Want To Know The EXACT Details About The 2015 EDITION Insiders Information System?? So What is Insiders Information Software all about? So Does Insiders Information Actually Work? Is Insiders Information Software application scam or does it really work?
To find answers to these questions, continue reading my in-depth and honest Insiders Information Review below.
Insiders Information Description:
Name: Insiders Information
Niche: Binary Options.
Official Web site: Activate The NEW 2015 EDITION Insiders Information Software!! CLICK HERE NOW!!!
Exactly what is Insiders Information?
Insiders Information is essentially a binary options trading software application that is designed to help traders forecast market trends and win with binary options. The software also provides analyses of market conditions so that traders can know what their next step should be. It offers various techniques that ultimately assist traders without requiring complex trading indicators or chart-following.
Insiders Information Binary Options Trading Strategy
Base your trades on the Insiders Information trading method, starting small. After you see it working, you can begin to execute your strategy with regular-sized lots. This approach will pay off gradually. Every forex binary options trader should select an account type that is in accordance with their needs and expectations. A bigger account does not indicate a larger revenue potential, so it is a good idea to begin small and gradually add to your account as your returns increase based upon the trading choices you make.
Binary Options Trading
To help you trade binary options properly, it is essential to understand the basics of binary options trading. Currency trading, or forex, is based on the perceived value of two currencies relative to one another, and is affected by, among other things, the political stability of the country, inflation, and interest rates. Keep this in mind as you trade and learn more about binary options to maximize your learning experience.
Insiders Information System is a multi Award Winning Financial Software
The Insiders Information team will guarantee your success and place you on the road to making a great income by trading binary options. The Insiders Information team:
CEO and Founder: Richard Vinner
Co-Founder and Senior Developer: Michael Williams
Head Financial Analyst: Bianca Mills
Junior Developer: Rey Benedict
Help desk support specialist: Tanya Kibble
QA Automation Manager: Stewart Resinsky
Insiders Information Summary
In summary, there are some obvious ideas that have been tested over time, along with some more recent techniques that you might not have considered. Ideally, as long as you follow what we recommend in this post, you can either start trading with Insiders Information or improve on what you have already done.
The Insiders Information 2014 EDITION Sold For $1999 But for a VERY LIMITED Time You Will Get INSTANT FREE Access To The NEW Insiders Information 2015 EDITION
Click Here To Claim Your Insiders Information 2015 EDITION User License!!
Are You Looking For An Insiders Information Alternative?? CLICK HERE NOW!
Click Here To Download Insiders Information Right NOW!
submitted by MikelBatte to MikelBatte [link] [comments]

The Next Processor Change is Within ARMs Reach

As you may have seen, I sent the following Tweet: “The Apple ARM MacBook future is coming, maybe sooner than people expect”
Today, I would like to further elaborate on that.
tl;dr: Apple will be moving to ARM-based Macs in what I believe are 4 stages, starting around 2015 and ending around 2023-2025: release of T1-chip MacBooks, release of T2-chip MacBooks, release of at least one lower-end ARM MacBook, and transition of the full lineup to ARM. Reasons for each are below.
Apple is very likely going to switch their CPU platform to in-house silicon designs built on the ARM architecture. This understanding is fairly common amongst various Apple insiders. Here is my personal take on how this switch will happen and be presented to the consumer.
The first question would likely be “Why would Apple do this again?”. Throughout their history, Apple has already made two other storied CPU architecture switches - first from the Motorola 68k to PowerPC in the early 90s, then from PowerPC to Intel in the mid 2000s. Why make yet another? The leading reasons, expanded on below: Intel's stagnating performance and repeated fabrication delays, the inefficiencies of the x86_64 architecture itself, and the continued vertical integration Apple is well-known for, now backed by a decade of proven in-house silicon.
A common refrain heard on the Internet is the suggestion that Apple should switch to using CPUs made by AMD, and while this has been considered internally, it will most likely not be chosen as the path forward, even for megalithic machines like the Mac Pro. Even though AMD would mitigate Intel’s current set of problems, it does nothing to address the problems and inefficiencies of the x86_64 architecture itself, on top of jumping to a platform that doesn’t have a decade of proven support behind it. Why spend a lot of effort re-designing and re-optimizing for AMD’s platform when you can just put that effort into your own, and continue the vertical integration Apple is well-known for?
I believe that internal development for the ARM transition started around 2015/2016 and is considered to be happening in 4 distinct stages. Not all of this is information from Apple insiders; some of it is my own interpretation based on information gathered from supply-chain sources, examination of MacBook schematics, and other indicators from Apple.

Stage 1 (from 2014/2015 to 2017):

The rollout of computers with Apple’s T1 chip as a coprocessor. This chip is very similar to Apple’s T8002 chip design, which was used for the Apple Watch Series 1 and Series 2. The T1 is primarily present on the first Touch ID-enabled Macs: the 2016 and 2017 model year MacBook Pros.
Considering the amount of time required to design and validate a processor, this stage most likely started around 2014 or 2015, with early experimentation to see whether an entirely new chip design would be required, or if it would be sufficient to repurpose something in the existing lineup. As we can see, the general-purpose ARM processors aren’t a one-trick pony.
To get a sense of the decision making at the time, let’s look back a bit. The year is 2016, and we're witnessing the beginning of the stagnation of Intel’s processor lineup. There is not a lot to look forward to other than another “+” being added to the 14nm fabrication process. The MacBook Pro has used the same design for many years now, and its age is starting to show. Moving to AMD is still very questionable, as they’ve historically not been able to match Intel’s performance or functionality, especially at the high end; since the “Ryzen” lineup is still unreleased, there are absolutely no benchmarks or other data to show they are worth consideration, and AMD’s most recent line of “Bulldozer” processors was very poorly received.

Now is probably as good a time as any to begin experimenting with the in-house ARM designs, but it’s not time to dive into the deep end yet: our chips are not nearly mature enough to compete, and it’s not yet certain how long Intel will be stuck in the mud. As well, it is widely understood that Apple and Intel have an exclusivity contract in exchange for advantageous pricing. Any transition would take considerable time and effort, and since there is no current viable alternative to Intel, the in-house chips will need to advance further, and breaching a contract with Intel is too great a risk. So it makes sense to start with small deployments, to extend the timeline, stretch out to the end of the contract, and eventually release a real banger of a Mac.
Thus, the 2016 Touch Bar MacBooks were born, alongside the T1 chip mentioned earlier. There are good reasons for abandoning the piece of hardware previously used for a similar purpose, the SMC or System Management Controller. I suspect that the biggest reason was to allow early analysis of the challenges that would be faced migrating Mac built-in peripherals and IO to an ARM-based controller, as well as exploring the manufacturing, power, and performance results of using the chips across a broad deployment, and analyzing any early failure data, then using this to patch any issues, enhance processes, and inform future designs looking towards the 2nd stage.
The former SMC duties now moved to the T1 include things like thermal and fan management, power and sleep/wake handling, battery charging control, and keyboard backlight and indicator control.
The T1 chip also communicates with a number of other controllers to manage a MacBook’s behavior. Even though it’s not a very powerful CPU by modern standards, it’s already responsible for a large chunk of the machine’s operation. Moving control of these peripherals to the T1 chip also brought about the creation of the fabled BridgeOS software, a shrunken-down watchOS-based system that operates fully independently of macOS and the primary Intel processor.
BridgeOS is the first step for Apple’s engineering teams to begin migrating underlying systems and services to integrate with the ARM processor, and it allowed internal teams to more easily and safely develop and issue firmware updates. Since BridgeOS is based on a standard and now well-known system, it means that they can leverage existing engineering expertise to flesh out the T1’s development, rather than relying on the more arcane and specialized SMC system, which operates completely differently and requires highly specific knowledge to work with. It also allows reuse of the same fabrication pipeline used for Apple Watch processors, and eliminates the need to have yet another IC design for the SMC, coming from a separate source, saving a bit on cost.
Also during this time, on the software side, “Project Marzipan”, today Catalyst, came into existence. We'll get to this shortly.
For the most part, Stage 1 went by without any major issues. There were a few firmware problems during the product launch, but they were quickly solved with software updates. Once engineering teams had experience building, manufacturing, and shipping the T1 systems, Stage 2 could begin.

Stage 2 (2018-Present):

Stage 2 encompasses the rollout of Macs with the T2 coprocessor, replacing the T1, across a much wider lineup: the MacBook Pro with Touch Bar starting with 2018 models, the MacBook Air starting with 2018 models, the iMac Pro, the 2019 Mac Pro, and the Mac Mini starting in 2018.
With this iteration, the more powerful T8012 processor design was used, a further revision of the T8010 design that powers the A10 series processors used in the iPhone 7. This change provided a significant increase in computational ability and brought about the integration of even more devices into the T2. In addition to the T1’s existing responsibilities, the T2 now controls:
- The audio controller
- The image signal processor for the FaceTime camera
- The internal SSD storage controller
- The Secure Enclave, providing secure boot and on-the-fly storage encryption
Those last 2 points are crucial for Stage 2. Under this new paradigm, the vast majority of the Mac is now under the control of an in-house ARM processor. Stage 2 also brings iPhone-grade hardware security to the Mac. These T2 models also incorporated a supported DFU (Device Firmware Update, more commonly “recovery mode”), which acts similarly to the iPhone DFU mode and allows restoration of the BridgeOS firmware in the event of corruption (most commonly due to user-triggered power interruption during flashing).
Putting more responsibility onto the T2 again allows for Apple’s engineering teams to do more early failure analysis on hardware and software, monitor stability of these machines, experiment further with large-scale production and deployment of this ARM platform, as well as continue to enhance the silicon for Stage 3.
A few new user-visible features were added as well in this stage, such as support for the passive “Hey Siri” trigger, and offloading image and video transcoding to the T2 chip, which frees up the main Intel processor for other applications. BridgeOS was bumped to 2.0 to support all of these changes and the new chip.
On the macOS software side, what was internally known as Project Marzipan was first demonstrated to the public. Though it was originally discovered around 2017, and most likely began development and testing within later parts of Stage 1, its effects could be seen in 2018 with the release of iPhone apps, now running on the Mac using the iOS SDKs: Voice Recorder, Apple News, Home, Stocks, and more, with an official announcement and public release at WWDC in 2019. Catalyst would come to be the name of Marzipan used publicly. This SDK release allows app developers to easily port iOS apps to run on macOS, with minimal or no code changes, and without needing to develop separate versions for each. The end goal is to allow developers to submit a single version of an app, and allow it to work seamlessly on all Apple platforms, from Watch to Mac. At present, iOS and iPadOS apps are compiled for the full gamut of ARM instruction sets used on those devices, while macOS apps are compiled for x86_64. The logical next step is to cross this bridge, and unify the instruction sets.
With the T2 release, the new products have not been quite as well received as the T1 generation. Many users have noticed how this change further contributes towards machines with limited to no repair options outside of Apple’s repair organization, and there have been some general issues with bugs in the T2.
Products with the T2 also no longer have the “Lifeboat” connector, which was previously present on 2016 and 2017 model Touch Bar MacBook Pros. This connector allowed a certified technician to plug in a device called a CDM Tool (Customer Data Migration Tool) to recover data off of a machine that was not functional. The removal of this connector limits the options for data recovery in the event of a problem, and Apple has never offered any data recovery service, meaning that an irreparable failure of the T2 chip or the primary board results in complete data loss, in part due to the strong encryption provided by the T2 chip (even if the data could be read off, the encryption keys would be lost with the T2 chip).

The T2 also brought about the linkage of serial numbers of certain internal components, such as the solid state storage, display, and trackpad. In fact, many other controllers on the logic board are now also paired to the T2, such as the WiFi and Bluetooth controller, the PMIC (Power Management Controller), and several other components. This is the exact same system used on newer iPhone models and is quite familiar to technicians who repair iPhone logic boards.

While these changes are fantastic for device security and for corporate and enterprise users, allowing a very high degree of assurance that devices will refuse to boot if tampered with in any way - even from storied supply chain attacks, or other malfeasance that can be done with physical access to a machine - they have created difficulty for consumers, who more often lack the expertise or awareness to keep critical data backed up, as well as the funds to pay for the necessary repairs from authorized repair providers. Other reported issues suspected to be related to the T2 are audio “cracking” or distortion on the internal speakers, and BridgeOS becoming corrupt following a firmware update, resulting in a machine that can’t boot.
I believe these hiccups will be properly addressed once macOS is fully integrated with the ARM platform. This stage of the Mac is more like a chimera of an iPhone and an Intel based computer. Technically, it does have all of the parts of an iPhone present within it, cellular radio aside, and I suspect this fusion is why these issues exist.
Recently, security researchers discovered an underlying security problem present within the Boot ROM code of the T1 and T2 chip. Due to being the same fundamental platform as earlier Apple Watch and iPhone processors, they are vulnerable to the “checkm8” exploit (CVE-2019-8900). Because of how these chips operate in a Mac, firmware modifications caused by use of the exploit will persist through OS reinstallation and machine restarts. Both the T1 and T2 chips are always on and running, though potentially in a heavily reduced power usage state, meaning the only way to clean an exploited machine is to reflash the chip, triggering a restart, or to fully exhaust or physically disconnect the battery to flush its memory. Fortunately, this exploit cannot be done remotely and requires physical access to the Mac for an extended duration, as well as a second Mac to perform the change, so the majority of users are relatively safe. As well, with a very limited execution environment and access to the primary system only through a “mailbox” protocol, the utility of exploiting these chips is extremely limited. At present, there is no known malware that has used this exploit. The proper fix will come with the next hardware revision, and is considered a low priority due to the lack of practical usage of running malicious code on the coprocessor.
At the time of writing, all current Apple computers have a T2 chip present, with the exception of the 2019 iMac lineup. This will change very soon with the expected release of the 2020 iMac lineup at WWDC, which will incorporate a T2 coprocessor as well.
Note: from here on, this turns entirely into speculation based on info gathered from a variety of disparate sources.
Right now, we are in the final steps of Stage 2. There are strong signs that a MacBook (12”) with an ARM main processor will be announced this year at WWDC (“One more thing...”), at a Fall 2020 event, a Q1 2021 event, or WWDC 2021. Based on the lack of a more concrete answer, WWDC 2020 will likely not see it, but I am open to being wrong here.

Stage 3 (Present/2021 - 2022/2023):

Stage 3 involves the introduction of at least one fully ARM-powered Mac into Apple’s computer lineup.
I expect this will come in the form of the previously-retired 12” MacBook. There are rumors that Apple is still working internally to perfect the infamous Butterfly keyboard, and there are also signs that Apple is developing an A14X-based processor with 8-12 cores designed specifically for use as the primary processor in a Mac. It makes sense that this model could see the return of the Butterfly keyboard, considering how thin and light it is intended to be, and using an A14X processor would make it a very capable, very portable machine that should give customers a good taste of what is to come.
Personally, I am excited to test the new 12" “ARMbook”. I do miss my own original 12", even with all the CPU failure issues those older models had. It was a lovely form factor for me.
It's still not entirely known whether the physical design of these will change from the retired version, exactly how many cores it will have, the port configuration, etc. I have also heard rumors about the 12” model possibly supporting 5G cellular connectivity natively thanks to the A14 series processor. All of this will most likely be confirmed soon enough.
This 12” model will be the perfect stepping stone for stage 3, since Apple’s ARM processors are not yet a full-on replacement for Intel’s full processor lineup, especially at the high end, in products such as the upcoming 2020 iMac, iMac Pro, 16” MacBook Pro, and the 2019 Mac Pro.
Performance of Apple’s ARM platform compared to Intel’s has been a big point of contention over the last couple of years, primarily due to the lack of data representative of real-world desktop usage scenarios. The iPad Pro and other models with Apple’s highest-end silicon still lack the ability to run a lot of high-end professional applications, so benchmark data about anything beyond video editing and photo editing tasks quickly becomes meaningless. While there are fully synthetic benchmarks like Geekbench, AnTuTu, and others that try to bridge the gap, they are very far from being accurate or representative of real-world performance in many instances.

Even though Apple’s ARM processors are incredibly powerful, and I do give constant praise to their silicon design teams, there still just isn’t enough data to show how they will perform in real-world desktop usage scenarios, and synthetic benchmarks are like standardized testing: they only show how good a platform is at running the synthetic benchmark. This type of benchmark stresses only very specific parts of each chip at a time, rather than measuring how well it does a general task, and then boils down the complexity and nuances of each chip into a single numeric score, which is not a remotely accurate way of representing processors with vastly different capabilities and designs. It would be like gauging how well a person performs a manual labor task by averaging only the speed of every individual muscle in the body, regardless of if, or how much, each is used. A specific group of muscles being stronger or weaker than others could wildly skew the final result and grossly misrepresent the performance of the person as a whole.

Real-world program performance will be the key in determining the success and future of this transition, and it will have to be great on this 12” model - not just in a limited set of tasks, it will have to be great at *everything*.
It is intended to be the first Horseman of the Apocalypse for the Intel Mac, and it better behave like one. Consumers have been expecting this, especially after 15 years of Intel processors, the continued advancement of Apple’s processors, and the decline of Intel’s market lead.
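The standardized-testing analogy can be made concrete. The sketch below is purely illustrative - the workloads, their sizes, and the plain averaging are arbitrary choices of mine, not the methodology of Geekbench or any real benchmark - but it shows how collapsing two very different workloads into one number hides which one dominated:

```python
import timeit

def int_heavy(n=200_000):
    # Tight integer arithmetic loop: mostly exercises ALU-style work.
    total = 0
    for i in range(n):
        total += i * i
    return total

def mem_heavy(n=200_000):
    # Builds and walks a list: mostly exercises allocation and memory traffic.
    data = list(range(n))
    return sum(data[i] for i in range(0, n, 2))

t_int = timeit.timeit(int_heavy, number=5)
t_mem = timeit.timeit(mem_heavy, number=5)

# A single averaged "score" conceals which subsystem actually did the work;
# two chips with opposite strengths could land on the same number.
score = (t_int + t_mem) / 2
print(f"int-heavy: {t_int:.3f}s  mem-heavy: {t_mem:.3f}s  combined score: {score:.3f}s")
```

Two machines with opposite int/memory balances can produce identical combined scores, which is exactly why single-number synthetic results say so little about real desktop workloads.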
The point of this “demonstration” model is to ease both users and developers into the desktop ARM ecosystem slowly. Much like how the iPhone X paved the way for FaceID-enabled iPhones, this 12" model will pave the way towards ARM Mac systems. Some power-user type consumers may complain at first, depending on the software compatibility story, then realize it works just fine, since the majority of computer users today do not do many tasks that can’t be accomplished on an iPad or lower-end computer. Apple needs to gain the public’s trust for basic tasks first, before they will be able to break into the market of users performing more hardcore or “Pro” tasks. This early model will probably not be targeted at these high-end professionals, which will allow Apple to begin to gather early information about the stability and performance of this model, day-to-day usability, developmental issues that need to be addressed, hardware failure analysis, etc. All of this information is crucial to Stage 4, or possibly later parts of Stage 3.
The 2 biggest concerns most people have with the architecture change are app support and Bootcamp.
Any apps released through the Mac App Store will not be a problem. Because App Store apps are submitted as LLVM IR (“Bitcode”), the system can automatically serve versions compiled and optimized for ARM platforms, similar to how App Thinning works on iOS. For apps distributed outside the App Store, things might be trickier. There are a few ways this could go: developers could recompile and ship universal binaries containing both architectures, Apple could provide a translation layer in the vein of the original Rosetta, or x86_64-only apps could simply stop working on the new machines.
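The App Thinning idea - the store holds one submission and each device receives the build matching its CPU - can be sketched in miniature. The helper below is hypothetical and greatly simplified (the slice names and selection logic are my own illustration, not Apple's actual mechanism); it just shows the "serve the right architecture" decision:

```python
import platform

def pick_slice(available=("arm64", "x86_64")):
    """Hypothetical helper: pick which prebuilt app 'slice' matches the
    host CPU, loosely mimicking a store backend serving the right build."""
    machine = platform.machine().lower()
    # Normalize common names: Apple reports 'arm64', Linux reports 'aarch64'.
    if machine in ("arm64", "aarch64"):
        wanted = "arm64"
    elif machine in ("x86_64", "amd64"):
        wanted = "x86_64"
    else:
        wanted = machine  # unknown architecture: likely no matching slice
    return wanted if wanted in available else None

print(pick_slice())  # e.g. 'x86_64' on an Intel Mac, 'arm64' on an ARM Mac
```

Because the store keeps the IR rather than final machine code, adding a new architecture is a backend recompile, not a developer resubmission - which is why App Store apps are the easy case in this transition.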
As for Bootcamp, while ARM-compatible versions of Windows do exist and are in development, they come with their own similar set of app support problems. Microsoft has experimented with emulating x86_64 on their ARM-based Surface products, and some other OEMs have created their own Windows-powered ARM laptops, but with very little success. Performance is a problem across the board, with other ARM silicon not being anywhere near as advanced, and the majority of apps in the Windows ecosystem that were not developed in-house at Microsoft run terribly under the x86_64 emulation software. If Bootcamp does come to the early ARM MacBook, it will more than likely run very poorly for anything other than Windows UWP apps. There is a high chance it will be abandoned entirely until Windows becomes much friendlier to the architecture.
I believe this will also be a very crucial turning point for the MacBook lineup as a whole. At present, the iPad Pro paired with the Magic Keyboard is, in many ways, nearly identical to a laptop, with the biggest difference being the system software itself. While Apple executives have outright denied plans of merging the iPad and MacBook lines, that could very well just be a marketing stance, shutting down the rumors in anticipation of a well-executed surprise. I think that Apple might at least re-examine the possibility of merging Macs and iPads in some capacity, but whether they proceed or not could be driven by consumer reaction to both products. Do they prefer the feel and usability of macOS on ARM, and like the separation of both products? Is there success across the industry of the ARM platform, both at the lower and higher end of the market? Do users see that iPadOS and macOS are just 2 halves of the same coin? Should there be a middle ground, and a new type of product similar to the Surface Book, but running macOS? Should Macs and iPads run a completely uniform OS? Will iPadOS ever expose the same sort of UNIX-based tools for IT administrators and software developers that macOS has? These are all very real questions that will pop up in the near future.
The line between Stage 3 and Stage 4 will be blurry, and will depend on how Apple wishes to address different problems going forward, and what the reactions look like. It is very possible that only 12” will be released at first, or a handful more lower end model laptop and desktop products could be released, with high performance Macs following in Stage 4, or perhaps everything but enterprise products like Mac Pro will be switched fully. Only time will tell.

Stage 4 (the end goal):

Congratulations, you’ve made it to the end of my TED talk. We are now well into the 2020s and COVID-19 Part 4 is casually catching up to the 5G = Virus crowd. All Macs have transitioned fully to ARM: iMac, MacBooks Pro and otherwise, Mac Pro, Mac Mini, everything. The future is fully Apple from top to bottom, and the vertical integration leading to market dominance continues. Many other OEMs have begun to follow this path to some extent, creating more demand for a similar class of silicon from other firms.
The remainder here is pure speculation with a dash of wishful thinking. There are still a lot of things that are entirely unclear. The only concrete thing is that Stage 4 will happen when everything is running Apple’s in-house processors.
By this point, consumers will be quite familiar with ARM Macs, and developers will have had enough time to transition apps fully over to the newly unified system. Any performance, battery life, or app support concerns should no longer be an issue.
There are no more details here, it’s the end of the road, but we are left with a number of questions.
It is unclear if Apple will stick to AMD's GPUs or whether they will instead opt to use their in-house graphics solutions that have been used since the A11 series of processors.
How Thunderbolt support on these models of Mac will be achieved is unknown. While Intel has made it openly available for use, and there are plans to have USB and Thunderbolt combined in a single standard, it’s still unclear how it will play along with Apple processors. Presently, iPhones do support connecting devices via PCI Express to the processor, but it has only been used for iPhone and iPad storage. The current Apple processors simply lack the number of lanes required for even the lowest end MacBook Pro. This is an issue that would need to be addressed in order to ship a full desktop-grade platform.
There is also the question of upgradability for desktop models, and if and how there will be a replaceable, socketed version of these processors. Will standard desktop and laptop memory modules play nicely with these ARM processors? Will they drop standard memory across the board, in favor of soldered options, or continue to support user-configurable memory on some models? Will my 2023 Mac Pro play nicely with a standard PCI Express device that I buy off the shelf? Will we see a return of “Mac Edition” PCI devices?
There are still a lot of unknowns, and guessing any further in advance is too difficult. The only thing that is certain, however, is that Apple processors coming to Mac is very much within arm’s reach.
submitted by Fudge_0001 to apple

A trans person's measured take on the trans sports issue

So first of all this post was inspired by GGExMachina's brief statement on the issue:
For example, it is objectively the case that biological men have a physical advantage over women. Yet if someone points this out and suggests that transgender people shouldn’t be allowed to fight in women’s UFC, or women’s soccer or weightlifting competitions or whatever, suddenly you’re some kind of evil monster. Rather than saying that of course trans people shouldn’t be bullied and that we could perhaps have a trans olympics (like the Paralympics and Special Olympics), we are expected to lie.
I've found that this position is incredibly popular among liberals/left-leaning people, especially here on reddit. It seems like, once or twice a month, like clockwork, a thread stating more or less the same thing on /unpopularopinion or /offmychest will get thousands of upvotes. And while I completely understand the thought process that leads otherwise left-leaning people to come to such conclusions, I feel like the issue has been, broadly speaking, dishonestly presented to the general public by a mixture of bad-faith actors and people who have succumbed to the moral panic. And, as I've seen, there are plenty of people in this subreddit and elsewhere who are itching to be as supportive as they possibly can to the trans community but find themselves becoming very disillusioned by this particular issue. By making this post I hope to present a more nuanced take on the issue, not only in regards to my personal beliefs on what kinds of policies are best to preserve fairness in women's sports but also in regards to shining a light on how this issue is often times dishonestly presented in an attempt to impede the progression of pro-trans sentiments in the cultural zeitgeist.

Sex & Gender

The word "transgender" is an umbrella term that refers to people whose gender identities differ from those typically associated with the sex they were assigned at birth. According to the 2015 U.S. Transgender Survey, the approximate composition of "the trans community" in the United States is 29% Transgender men (Female-to-Male), 33% Transgender women (Male-to-Female), and 35% non-binary. (The remaining 3% were survey respondents who self-identified as "crossdressers", who were still included in the survey on the grounds of being gender non-conforming)
While non-binary people, as a group, are probably deserving of their own separate post, the focus of this post will be on trans men and trans women. I will also be primarily focusing on transgender people who pursue medical transition with Hormone Replacement Therapy, as they are most relevant to the issue of sports. (Mind that while the majority of binary trans people fit into this camp, there is a sizable minority of trans people who do not feel the need to medically transition.)
What do trans people believe about Gender?
The views of transgender people in regards to Gender are actually pretty varied, although the most prominent positions that I've personally seen are best summed up into two different camps:
  1. The "Trans-Medical" camp
Transgender people who fall into this camp usually consider Gender Dysphoria to be the defining factor of what makes somebody trans. The best way I can describe this camp is that they sort of view being transgender as akin to being intersex. Only whereas an intersex person would be born with a disorder that affects the body, a trans person is born with a disorder that affects the brain. Trans people in this camp oftentimes put an emphasis on a clinical course for treatment: for example, a person goes to a psychologist, gets diagnosed with gender dysphoria, starts hormone replacement therapy, pursues surgery, then emerges from this process either cured of the gender dysphoria or, at the very least, treated to the fullest extent of medical intervention. This position is more or less the original position held by trans activists, back in the day when the word "transsexual" was used instead of "transgender", though many younger trans people, notably YouTuber Blaire White, also hold this position. Under this position, sex and gender are still quite intertwined, but a trans man can still be considered a man, and a trans woman a woman, under the belief that sex/gender doesn't just refer to chromosomal sex and reproductive organs, but also to neurobiology, genitalia, and secondary sex characteristics. So someone who is transgender, according to this view, is born with the physical characteristics of one sex/gender but the neurobiology of another, and will change their physical characteristics, to the fullest extent medically possible, to match the neurobiology and therefore cure the individual of gender dysphoria.
Critics of this position argue that this mentality is problematic due to being inherently exclusive to transgender people who do not pursue medical transition, who are oftentimes deemed "transtrenders" by people within this camp. Many people find it additionally problematic because it is also inherently exclusive to poorer trans people, particularly those in developing nations, who may not have access to trans-related medical care. Note that there are plenty of trans people who *do* have access to medical transition, but nevertheless feel as if the trans community shouldn't gatekeep people who cannot afford or do not desire medical transition, thus believing in the latter camp.
  2. The "Gender Identity" camp
I feel like this camp is the one most popularly criticized by people on the right, but it is also probably the most mainstream. It is the viewpoint held by many more left-wing trans people (note that in the aforementioned 2015 survey, only 1% of trans respondents voted Republican, so trans people are largely a pretty left-wing group, therefore it makes sense that this position would be the most mainstream), but it is also notably held by the American Psychological Association, the American Psychiatric Association, GLAAD, and other mainstream health organizations and activist groups.
While people in this camp still acknowledge that medical transition to treat gender dysphoria can still be a very important aspect of the transgender experience, it's believed that the *defining* experience is simply having a gender identity different from the one they were assigned at birth. "Gender identity" simply being the internal, personal sense of being a man, a woman, or outside the gender binary.
Many people in this camp, though, still often maintain that gender identity is (at least partially) neurobiological, but differ from the first camp in regards to acknowledging that the issue is less black & white than an individual simply having a "male brain" or a "female brain", but rather that the neurological characteristics associated with gender exist on more of a spectrum, thus leaving the door open to gender non-conforming people who do not identify as trans, as well as to non-binary people. This is where the "gender is a spectrum" phrase comes from.
"52 genders" is a popular right-wing meme that makes fun of this viewpoint, however it is important to note that many trans and non-binary people disagree with the idea of quantifying gender identity to such an absurd amount of individual genders, rather more simply maintaining that there are men, women, and a small portion of people in-between, with a few words such as "agender" or "genderqueer" being used to describe specific identities/presentations within this category.
It's also noteworthy that not all people in this camp believe that neurobiology is the be-all-end-all of gender identity, as many believe that the performativity of gender also plays an integral role in one's identity. (That gender identity is a mixture of neurobiology and performativity is a position held by YouTuber Contrapoints)
Trans people and biological sex
So while the aforementioned "Gender Identity" viewpoint has become quite popularized among liberals and leftists, I have noticed a certain rhetorical mentality/assumption become prevalent alongside it, especially among cisgender people who consider themselves trans-allies:
"Sex and Gender are different. A trans woman is a woman who is biologically male. A trans man is a man who is biologically female"
When "Sex" is defined by someone's chromosomes, or the sex organs they were born with, this is correct. However, there is a pretty good reason why the trans community tends to prefer terms like "Assigned Male at Birth" rather than "Biologically Male". This is done not only for the inclusion of people who are both intersex and transgender (for example, someone can be born intersex but assigned male based on the existence of a penis or micropenis), but also due to the aforementioned viewpoint on divergent neurobiology being the cause of gender dysphoria. Those reasons are why the word "Assigned" is used. But the reason why it's "Assigned Male/Female At Birth" instead of just "Assigned Male/Female" is because among the trans community there exists an understanding of the mutability of sexually dimorphic biology that the general population is often ignorant of. For example, oftentimes people (especially older folks) don't even know of the existence of Hormone Replacement Therapy, and simply assume that trans people get a single "sex change operation" that (for a trans woman) would just entail the removal of the penis and getting breast implants. Therefore they imagine the process to be "medically sculpting a male to look female" instead of a more natural biological process of switching the endocrine system from male to female or vice versa and letting the body change over the course of multiple years. It doesn't help that, for a lot of older trans people (namely Caitlyn Jenner, who is sadly probably the most high-profile trans person), the body can be a lot more resistant to change even with hormones, so they *do* need to rely on plastic surgery a lot more to get obvious results.
So what sexually dimorphic bodily characteristics can one expect to change from Hormone Replacement Therapy?
(Note that there is a surprising lack of studies done on some of the more intricate changes that HRT can cause, so I've put a "*" next to the changes that are anecdotal, but still commonly and universally observed enough among trans people [including myself for the MTF stuff] to consider factual. I've also put a "✝" next to the changes that only occur when people transition before or during puberty.)
Male to Female:
Female to Male:
For the sake of visual representation, here are a couple of images from /transtimelines to demonstrate these changes in adult transitioners (I've specifically chosen athletic individuals to best demonstrate muscular changes)
Additionally, here's a picture of celebrity Kim Petras who transitioned before male puberty, in case you were wondering what "female pubescent skeletal development" looks like in a trans woman:

How does this relate to sports?

Often times, when the whole "transgender people in sports" discussion arises, a logical error is made when *all* transgender people are assumed to be "biologically" their birth sex. For example, when talking about trans women participating in female sports, these instances will be referred to as cases of "Biological males competing against females".
As mentioned before, calling a trans woman "biologically male" strictly in regards to chromosomes or sex organs at birth would be correct. However, not only can it be considered derogatory (the word "male" is colloquially a shorthand for "man", after all), but there are many instances where calling a post-HRT transgender person "biologically [sex assigned at birth]" is downright misleading.
For example, hospitals have given transgender patients improper or erroneous medical care by basing treatment on birth sex where treatment based on their current endocrinological sex would have been more appropriate.
Acute Clinical Care of Transgender Patients: A Review
Conclusions and relevance: Clinicians should learn how to engage with transgender patients, appreciate that unique anatomy or the use of gender-affirming hormones may affect the prevalence of certain disease (eg, cardiovascular disease, venous thromboembolism, and osteoporosis), and be prepared to manage specific issues, including those related to hormone therapy. Health care facilities should work toward providing inclusive systems of care that correctly identify and integrate information about transgender patients into the electronic health record, account for the unique needs of these patients within the facility, and through education and policy create a welcoming environment for their care.
Some hospitals have taken to labeling the biological sex of transgender patients as "MTF" (for post-HRT trans women) and "FTM" (for post-HRT trans men), which is a much more medically useful identifier compared to their sex assigned at birth.
In regards to the sports discussion, I've seen *multiple threads* where redditors have backed up their opinions on the subject of trans people in sports with studies demonstrating that cis men are, on average, more athletically capable than cis women, which I personally find to be a pathetic misunderstanding of the entire issue.
Because we're not supposed to be comparing the athletic capabilities of natal males to natal females here. We're supposed to be comparing the athletic capabilities of *post-HRT male-to-females* to natal females. And, if we're going to really have a fact-based discussion on the matter, we need to have separate categories for pre-pubescent and post-pubescent transitioners, since, as mentioned earlier, the former will likely have different skeletal characteristics compared to the latter.
The current International Olympic Committee (IOC) model for trans participation, and criticisms of said model
(I quoted the specific guidelines from the International Cycling Union, but similar guidelines exist for all Olympic sports)
Elite Competition
At elite competition levels, members may have the opportunity to represent the United States and participate in international competition. They may therefore be subject to the policies and regulations of the International Cycling Union (UCI) and International Olympic Committee (IOC). USA Cycling therefore follows the IOC guidelines on transgender athletes at these elite competition levels. For purposes of this policy, international competition means competition sanctioned by the UCI or competition taking place outside the United States in which USA Cycling’s competition rules do not apply.
The IOC revised its guidelines on transgender athlete participation in 2015, to focus on hormone levels and medical monitoring. The main points of the guidelines are:
Those who transition from female to male are eligible to compete in the male category without restriction. It is the responsibility of athletes to be aware of current WADA/USADA policies and file for appropriate therapeutic use exemptions.
Those who transition from male to female are eligible to compete in the female category under the following conditions:
The athlete has declared that her gender identity is female. The declaration cannot be changed, for sporting purposes, for a minimum of four years.
The athlete must demonstrate that her total testosterone level in serum has been below 10 nmol/L for at least 12 months prior to her first competition (with the requirement for any longer period to be based on a confidential case-by-case evaluation, considering whether or not 12 months is a sufficient length of time to minimize any advantage in women’s competition).
The athlete's total testosterone level in serum must remain below 10 nmol/L throughout the period of desired eligibility to compete in the female category.
Compliance with these conditions may be monitored by random or for-cause testing. In the event of non-compliance, the athlete’s eligibility for female competition will be suspended for 12 months.
Valid criticisms of the IOC model are usually based on the fact that, even though hormone replacement therapy provokes changes to muscle mass, it does *not* shrink the size of someone's skeleton or cardiovascular system. Therefore an adult-transitioned trans woman could, even after losing all levels of male-typical muscle mass, still have an advantage in certain sports if she had an excessively large skeletal frame, and was participating in a sport where such a thing would be advantageous.
Additionally, the guidelines only require that athletes be able to demonstrate having had female hormone levels for 12-24 months, which isn't necessarily long enough to completely lose musculature gained from training on testosterone (anecdotally it can take 2-4 years to completely lose male-typical muscle mass). So the IOC guidelines don't have any safeguard against, for example, a trans woman training with testosterone as the dominant hormone in her body, and then taking hormones for the bare minimum time period and still having some of the advantage left.
Note that, while lower-level sports have had (to the glee of right-wing publications sensationalizing the issue) instances of this exact thing happening, in the 16 years since these IOC guidelines were established, not a single transgender individual has won an Olympic medal.
Also note that none of the above criticisms of the IOC policy would apply in regards to the participation of pre-pubescent-transitioned trans women. After all, male-pubescent bone structure and cardiovascular size, and male-typical muscle levels, can't possibly exist if you never went through male puberty to begin with.
What could better guidelines entail, to best preserve fairness in female sports while avoiding succumbing to anti-trans moral panic?
In my personal opinion, sports leagues should pick one of the three above options depending on what best fits the nature of the sport and the eliteness of the competition. For example, extremely competitive contact sports might be better off going with the first option, but an aerobic sport such as marathon running would probably be fine with the third option.

How this issue has been misrepresented by The Right

I'll use Joe Rogan as an example of this last thing:
She calls herself a woman but... I tend to disagree. And, uh, she, um... she used to be a man but now she has had, she's a transgender which is (the) official term that means you've gone through it, right? And she wants to be able to fight women in MMA. I say no f***ing way.
I say if you had a dick at one point in time, you also have all the bone structure that comes with having a dick. You have bigger hands, you have bigger shoulder joints. You're a f***ing man. That's a man, OK? You can't have... that's... I don't care if you don't have a dick any more...
If you want to be a woman in the bedroom and you know you want to play house and all of that other s*** and you feel like you have, your body is really a woman's body trapped inside a man's frame and so you got a operation, that's all good in the hood. But you can't fight chicks. Get the f*** out of here. You're out of your mind. You need to fight men, you know? Period. You need to fight men your size because you're a man. You're a man without a dick.
I'm not trying to discriminate against women in any way, shape, or form and I'm a big supporter of women's fighting. I loved watching that Ronda Rousey/Liz Carmouche fight. But those are actual women. Those are actual women. And as strong as Ronda Rousey looks, she's still looks to me like a pretty girl. She's a beautiful girl who happens to be strong. She's a girl! [Fallon Fox] is not a girl, OK? This is a [transgender] woman. It's a totally different specification.
Calling a trans woman a "man", equating transitioning to merely removal of the dick, and equating trans women's experiences as women to "playing house" and "being a woman in the bedroom": these things are obviously pretty transphobic, and if Rogan had said them about just any random trans woman, his statements would have likely been more widely seen in that light. But when it's about someone having an unfair advantage in sports, and the audience is supposed to be angry with her, it's much more socially acceptable to say such things. The problem is, when you say these kinds of things about one trans woman, you're essentially saying those derogatory things about all trans women by extension. It's the equivalent of using an article about a black home invader who murdered a family as an excuse to use a racial slur.
Now, I'm not saying that Rogan necessarily did this on purpose, in fact I'm more inclined to believe that it was done moreso due to ignorance rather than having an actual ideological agenda. But since then, many right wing ideologues who do have an ideological agenda have used this issue as an excuse to voice their opinions on trans people while appearing to be less bigoted. Ie. "I'm not trying to be a bigot or anything and I accept people's rights to live their lives as they see fit, but we NEED to keep men out of women's sports", as a sly way to call trans women "men".
Additionally, doing this allows them to slip in untrue statements about the biology of trans women. First of all, in regards to the statement "You have bigger hands, you have bigger shoulder joints": obviously, even in regards to post-pubescent transitioners, not every trans woman is going to have bigger hands and shoulder joints than every cis woman (my hands are actually smaller than my aunt's!). It's just that people who go through male puberty on average tend to have bigger hands and shoulder joints compared to people who go through female puberty. But over-exaggerating the breadth of sexual dimorphism, as if males and females were entirely different species to each other, helps to paint the idea of transitioning in a more nonsensical light.
I hope this thread has presented this issue in a better light for anyone reading it. Let me know if you have any thoughts/criticisms of my stances or the ways I went about this issue.
submitted by Rosa_Rojacr to samharris

Monster Chapter 30

Previous Next
Mox was lying in her bunk; her body hadn't caught up to the change in time between the two ships, but she was awake because there was no way she was missing the daily broadcast of Pirate Radio 1420 MHz, no longer on planet Darkturd. Hannah knew that 1420 MHz was something about hydrogen, so it didn't take Mox long to figure it out. What amused Hannah and Mox both about it was what Hannah had known: it was a clear channel left open to listen for aliens on. After Hannah told her how much of a cultural thing conspiracies about aliens were to them, it was sad the Stolm were who they met.
Hannah's singing was a nice backdrop for the internal dialogue she was having: are we real when we are in FTL? There is always that debate around new technology that it's not possible; FTL was only the exception because even now, on paper, it still wasn't possible. It was like gravity in this sense: you could describe and predict it, but no one could tell you what it was. You rode a bubble of reality, but you weren't just moving really fast, you could no longer be detected in realspace. Maybe inside the bubble you were still real, but if the bubble wasn't in reality, was it real anymore? She gave herself the old spacer warning that trying to figure it out while you are doing it will drive you mad, but then told herself, well, I guess it's a good thing I already went mad. It was one of those things that her mind wouldn't let go of precisely because she didn't understand it.
She had been really happy to find out that the crew, for the most part, liked Hannah even though some of her practical jokes were a bit mean. Mox was pleased that they took it in stride that she was just some bored xeno kid, but wondered what their opinions would be when they had access to the network. They would find out soon enough; Krelin's plan required Mox getting physical contact with a terminal with a direct link. Mox had mixed feelings about it: last time she turned the spyder loose to deliver its information, some of the people who received that information ended up dead. They had considered just finding a hyperspace relay and hacking into it, but didn't want to play that card and have the Navy increase the security on them. A Vexen colony or station made sense because it was the closest place where no one asked questions about covering your face to hide your identity from surveillance; the breathers weren't optional. Mox just didn't like backtracking 10 days to do it. She wished Hannah could get off the ship for a while, but someone in a biosuit would definitely cause unwanted attention from the authorities. Mox knew Hannah would accept it, but it just couldn't be good for her to live this way for years now; she didn't know the effects of long-term void travel on humans, mentally or physically.
She had finally managed to hack the system that would let her change the identity of the Captain's Mistress. It had scared the comms officer quite a bit when she started taking apart the stations on the bridge while they were in FTL; she tried not to flush every time it scared her while she was doing it. It just had to be done, though, because if the spyder's origin was tracked it wouldn't be hard to figure out what ship they came from.
The U.S.A.S. Exploratorem fired her cold gas thrusters to drop her into the edge of the gravity well of the nearby planetoid. It was time for her to make her cold exit from the Stolm system. She was basically a giant optical telescope and engine disguised as a space rock. Her captain's biggest worry was the observatory orbiting the smaller star of the binary, so they waited until they came into close proximity with a small comet and used a cannon shot to change its orbit as they fired the jets, changing theirs. Just space junk bumping into each other, nothing to see here. After a half orbit of the planetoid they would be on a path that put them out of sight of the observatory, behind the large ice giant at the outer edge of the system. This was the 8th time they had entered and left the system, but they were worried about being detected now that the xeno Navy had built up a large presence in the system.
This was the end of their rotation and they were headed all the way back to friendly space. The crew was amusing themselves trying to pick out which rock was their relief, the Seeker; it was scheduled to arrive 2 days before, but they had no way to know if it had. Everyone wished they were heading planetside but would gladly settle for just getting out of this tin can on the Buffalo.
They would be back in 6 months if the mission wasn't cancelled. They had made a real mess of this system, and the debris was starting to reach its Oort cloud, travelling at a decent percentage of light speed. The captain had nightmares about running into bodies in the debris. They had been on station during the attack, and it just wasn't something you slept OK after seeing. They had to shake it off and keep going; being up close and personal with the ugly was what you signed up for when you joined a Force Recon unit. The captain thought to himself about the old saying, this is why we make the big bucks. He turned his mind to the game of spy vs spy he would be playing on the Buffalo, dating an enlisted trooper.
At least he wasn't one of the poor bastards captaining one of the xeno boats in system with the debris beating the hell out of the shields day and night.
14 hours and they could spool up and get out of here and then 39 days until he could get lucky.
submitted by Fornicious_Fogbottom to HFY

[Guide] Homebridge UniFi Cloudkey v1 (07/2020)

A small preface: after a lot of trial & error I finally managed to install Homebridge + Config UI X on a UniFi Cloudkey V1. I have spent many hours of testing to get Homebridge running correctly. First I followed Ro3lie's guide, and this was partly successful, but NodeJS was version 10.x and the service (running Homebridge as a service) was not working. NodeJS 10.x is not ideal for some Homebridge plugins (some need NodeJS 12.x), and since Homebridge was not running as a service, if you restart the Cloudkey or have a network issue you have to manually start it over SSH. I have used Putty for the SSH connection and WinSCP to change some files, because I have/had almost no knowledge of NodeJS, coding skills, etc., so I used the combo of SSH and WinSCP.
This guide will install the following
Update Cloudkey Firmware and reset to factory defaults:
Uninstalling the UniFi Controller:
Changing the .list files:
Deb is used to indicate that we want to download indexes of binary packages. We also need to change and delete some files. For this part I used WinSCP (an SFTP client), but if you have some more skills you can also do it from your SSH connection; if you want to do it with SSH, find the info in Ro3lie's guide. Open /etc/apt/sources.list, delete all the text inside, and paste the following:
deb buster main contrib non-free
deb-src buster main contrib non-free
deb buster-updates main contrib non-free
deb-src buster-updates main contrib non-free
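As a side note, every line in these files follows the same four-field APT format. Here is a minimal sketch that pulls one entry apart; the mirror URI shown is the standard Debian one, used purely as an illustration (your entries may differ):

```shell
# Anatomy of a sources.list entry: <type> <uri> <suite> <components...>
# "deb" fetches binary package indexes, "deb-src" fetches source packages.
line="deb http://deb.debian.org/debian buster main contrib non-free"
set -- $line            # split the entry into its fields
echo "type:  $1"        # deb
echo "suite: $3"        # buster
echo "components: $4 $5 $6"
```

Knowing the field order makes it easier to spot a mangled entry when `apt-get update` starts complaining.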
Go to /etc/apt/sources.list.d/ and you will find 3 files there; delete security.list and ubnt-unifi.list. Change the name of nodejs.list to nodesource.list. Open the file, again delete all the text inside, paste the following, and save the file:
deb stretch main
deb-src stretch main
Now run the following commands (from the SSH connection), and after everything is done reboot the Cloudkey (run the command reboot from your SSH connection):
sudo apt-get update
sudo apt-get clean && sudo apt-get clean all && sudo apt-get autoclean && sudo apt-get update
Update Debian OS:
We first need to update to the newer Debian Buster 10.x; at this moment the Cloudkey is running Debian Jessie 8.x. Run the command sudo apt-get update && sudo apt-get upgrade. During the upgrade you may be asked what to do with the unattended-upgrades configuration file; choose to ‘Keep the local version currently installed’. When everything is done we need to delete some files we no longer use. Run the following commands:
rm /etc/apt/apt.conf.d/50unattended-upgrades.ucf-dist
sudo apt-get remove freeradius
sudo apt-get purge freeradius
Update NodeJS 6.x to 12.x:
sudo apt update
sudo apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
curl -sL | sudo -E bash -
sudo apt -y install nodejs
To test whether you have successfully installed NodeJS 12.x and NPM 6.x.x, run the commands node -v and npm -v.
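If you want to guard against continuing with too old a Node, here is a small sketch that checks the major version reported by node -v (the node_ok helper is my own, not part of the original guide):

```shell
# node_ok: succeeds when a "node -v" style string (e.g. "v12.18.2") is v12+.
node_ok() {
  ver="$1"
  major="${ver#v}"              # strip the leading "v"
  major="${major%%.*}"          # keep only the major version number
  [ "$major" -ge 12 ] 2>/dev/null
}

if node_ok "$(node -v 2>/dev/null)"; then
  echo "NodeJS is new enough for Homebridge"
else
  echo "NodeJS 12.x or newer is required" >&2
fi
```

This is handy to drop at the top of any follow-up script so it refuses to run against the old NodeJS 6.x/10.x the Cloudkey ships with.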
Install Homebridge + Config UI X and setup Homebridge as a service:
sudo npm install -g --unsafe-perm homebridge homebridge-config-ui-x
sudo hb-service install --user homebridge
At this point all the available files for Homebridge and the service are installed. Normally Homebridge would now be running as a service, but for some reason it doesn’t, so we have to make some changes to get everything working. Use WinSCP and navigate to the file /etc/systemd/system/homebridge.service, delete all the text in it, paste the following, and save.
[Unit]
Description=Node.js HomeKit Server

[Service]
Type=simple
User=homebridge
EnvironmentFile=/etc/default/homebridge
# Adapt this to your specific setup (could be /usr/bin/homebridge)
# See comments below for more information
ExecStart=/usr/bin/homebridge $HOMEBRIDGE_OPTS
Restart=on-failure
RestartSec=10
KillMode=process

[Install]
WantedBy=multi-user.target
Now do the same for /etc/default/homebridge: delete the text and paste the following.
# Defaults / Configuration options for homebridge
# The following setting tells homebridge where to find the config.json
HOMEBRIDGE_OPTS=-U /var/lib/homebridge -I

# If you uncomment the following line, homebridge will log more
# You can display this via systemd's journalctl: journalctl -f -u homebridge
# DEBUG=*

# To enable web terminals via homebridge-config-ui-x uncomment the following line
HOMEBRIDGE_CONFIG_UI_TERMINAL=1

We need to make some user-rights changes and move the config file to the /var/lib folder. A few of these commands are not needed and will throw some errors; just ignore that and run them all.
sudo mkdir /var/lib/homebridge
sudo useradd --system homebridge
sudo chown -R homebridge:homebridge /var/lib/homebridge
sudo chmod 777 -R /var/lib/homebridge
sudo cp .homebridge/config.json /var/lib/homebridge/config.json
Start Homebridge as a service (run the following commands):
systemctl daemon-reload
systemctl enable homebridge
systemctl start homebridge
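To confirm the service actually came up, the usual systemd checks apply (standard systemctl/journalctl usage; these commands are not from the original guide):

systemctl status homebridge
journalctl -u homebridge -f

The journalctl command follows the live Homebridge log, which is handy if the service starts and then immediately exits.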
Homebridge is now running as a service and you can log in to UI-X at your Cloudkey's local IP address on port 8581. If you have a backup from another system you can restore it at this point; after the restore is done, don't do anything else and follow the next steps.
Homebridge SUDO rights using Visudo:
The last part is very important: we have to give the user homebridge sudo rights. If you don't do this part correctly you cannot update Homebridge, install packages, or use the log viewer in UI-X, because Homebridge won't have the correct rights. We're going to use visudo, which edits the sudoers file safely.
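The guide cuts off before showing the actual sudoers entry. As a minimal sketch (assuming the service user is named homebridge, as created earlier), run sudo visudo and add a line like:

homebridge    ALL=(ALL) NOPASSWD: ALL

Note this is a broad grant; if you want to tighten it, list only the specific commands UI-X needs (apt-get, npm, systemctl, shutdown) instead of ALL.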
That's it! If you've done everything correctly you now have a working Homebridge with UI-X running as a service on your UniFi Cloudkey! If anyone reads this guide and thinks some changes are needed, please let me know. Special thanks to Ro3lie for his original guide and Jeroen Van Dijk for the great support with visudo! You can find both original guides that inspired this tutorial here and here.
submitted by AverageUser1337 to homebridge

Use Synology NAS as UPS Server to safely power down your other servers/computers

Use Synology NAS as UPS Server to safely power down your other servers/computers
Hi everyone,
I know there's information on using our Synology NAS' as UPS servers to power down other Synology devices, but it took me some time to piece together how to use Synology to safely power down my Mac Mini server in the event of a power outage so I figured I'd put it into a writeup here to maybe help some others.
WHY: Most UPSes only have one USB port, so they can control only a single device when the power goes out and the battery runs low. If you're like me, you have a Synology NAS and other devices (i.e. a Mac server) that you want all powered down safely.
My setup: These steps are not specific to my hardware/OS/UPS, but figured I'd provide for context.
  • APC 600VA UPS
  • Synology DS918+
  • Mac mini (running Ubuntu server 20.04, not Mac OS X) - this is not specific to Linux, it will work for Mac, Windows servers/computers
  • Your router also needs to be attached to the UPS; otherwise your NAS and computers/servers won't be able to communicate.
Synology Setup
  1. Connect NAS to UPS via USB cable.
  2. Open up DSM and go to Control Panel > Hardware & Power > UPS (tab)
  3. Enable UPS Support and check "Enable network UPS server"
  4. Click "Permitted DiskStation Devices" and input the IP addresses of the servers/computers you would like to power down. In my case, I input the IP of my Mac Mini.
  5. Apply settings. If you click "Device Information" you should see your UPS info. (May require a restart of the NAS, I can't remember.)
Server/Computer Setup
Synology runs a NUT (Network UPS Tools) server, and in this part we have to install the NUT client to monitor the NAS. I am walking through setup of the NUT client on Linux, but the same basic steps apply for Mac/Windows; the NUT website has download/install instructions specific to those platforms.
1. Install NUT
sudo apt-get install nut 
2. Modify the /etc/nut/nut.conf file to specify your computer/server as a client instead of a server. Edit this specific line:
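The post doesn't show the line itself; in a stock nut.conf, the setting that turns the machine into a network client (standard NUT terminology) is:

MODE=netclient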
3. Add the Synology's address and credentials to /etc/nut/upsmon.conf
MONITOR ups@<your-synology-ip> 1 monuser secret slave 
*Note: these credentials can be changed, or you can add a user, by SSHing into the NAS and modifying /usr/syno/etc/ups/upsd.users.
  4. Lastly, start the nut-client service.
    service nut-client restart
Now if I unplug my UPS (to simulate a power outage), my Mac updates with the status of the UPS and safely shuts down when the Synology triggers it. I left my Synology set to trigger shutdown when the UPS battery runs low, but you can check "Time before DiskStation enters Safe Mode" in step 3 above and set a specific time to shut down.
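Before actually pulling the plug, you can check that the client sees the NAS using NUT's standard upsc query tool (the ups@... name here assumes the default Synology UPS name from the MONITOR line above):

upsc ups@<your-synology-ip>

If the link is healthy, this prints the UPS variables (battery.charge, ups.status, and so on) straight from the Synology's NUT server.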
Windows (Thanks to u/xnaas for providing Windows instructions)
  1. Download and install the latest binary
During install, uncheck the box for Install libUSB driver
  2. Go to the etc folder of your NUT installation folder
    Default: C:\Program Files (x86)\NUT\etc
  3. Rename or copy nut.conf.sample to nut.conf
  4. Rename or copy upsmon.conf.sample to upsmon.conf
  5. Edit MODE inside nut.conf
  6. Edit upsmon.conf
Find the SHUTDOWNCMD section
Default: SHUTDOWNCMD "/sbin/shutdown -h +0" 
Change the default to something like:
SHUTDOWNCMD "C:\\WINDOWS\\system32\\shutdown.exe -s -t 0" 
Customize the time (-t 0) to your liking. Optionally add -f to force the shutdown. If you want to hibernate, replace -s with -h.
Find the MONITOR section and add the following
MONITOR ups@<your-synology-ip> 1 monuser secret slave 
Make sure to update the IP to your Synology IP
  7. Copy libgcc_s_dw2-1.dll from the bin subfolder to the sbin subfolder
  8. Download OpenSSL library
  9. Copy libeay32.dll and ssleay32.dll to the sbin subfolder
  10. Launch services.msc from Run (WIN+R)
  11. Find the service called Network UPS Tools and Start it
To be added later. In the meantime, this wiki should be a good guide:
Hope this helps someone!
submitted by rgilkes to synology

Reverse Engineering Private iOS Frameworks in IDA Pro: A guide and troubleshooting reference that Hex-Rays didn't provide.

Note:
  • All discussion of resources that can be obtained from InternalUI builds should be regarded as hypothetical and purely educational. Obtaining these builds without the express permission of Apple is illegal, and doing so is discouraged. All information provided here is purely educational.
  • This guide was written for IDA 7.5. It should work on 7.3 and above. If you're using a cracked version, scroll down to the "Pre-7.3" section; the rest doesn't apply to you at all.

Crucial Performance Tips

General tips regarding IDA usage for iOS RE:
  • If you are not patient, do not use IDA on the dyld_shared_cache. You will lose your mind.
  • Modern versions of IDA come with a dark mode included. Google "IDASkins" if you are on an older version and enable a dark mode. Your eyes will thank you if you work at night.
A majority of the information in this article details the process of reverse engineering using the dyld_shared_cache, as doing such is poorly documented in official documents.

Terms used

Analyzing the dyld_shared_cache in IDA Pro 7.3 or later.

IDA 7.3 and later include a powerful, improved shared-cache toolkit. It eliminates the need for simulator binaries and makes analysis possible when you can't get access to them (InternalUI builds, no macOS, no x64 decompiler, etc.).
The documentation is not great, and as such, I've made an attempt at documenting my own experience with the software.
Everything described here was performed on a licensed copy of IDA Pro 7.5. Older, especially unlicensed versions, may not be able to handle all of these features.

Analyzing a specific framework from the dyld_shared_cache.

Do not use the "Load module and dependencies" option on "high level" frameworks. In iOS 13, with SpringBoardHome this results in loading 720 modules, which takes upwards of 2 to 3 days on an 8-core 4GHz PC with 32GB of RAM. In newer versions, due to consolidation, that number is down to ~400; you'll still be unable to use your PC for a few days at best. I have loaded an entire shared cache a total of 3 times, and I could write a separate article on the unfixable issues that result. It's not worth your time, I promise. Utilize the tools described below.
IDA 7.3 introduced powerful new tools for dealing with the cache. You can now load a single module and selectively load only segments you need from other locations in the shared cache. It can be a pain, but the alternative is much, much worse.

Load the framework you're interested in

  1. Select the "Load single module" option. Ensure you do not select "with dependencies".
  2. Wait for the module you selected to load. It shouldn't take long.
For this example we'll be using FrontBoard.framework.
Loading is the easy part. Now we get to go through the process of correcting IDA's failures, as certain functions tend to fall apart in the dyld_shared_cache subsystem.

Troubleshooting missing data (red addresses, garbage variable names, etc)

The first thing you'll notice is that the assembly or pseudocode generated is absolute gibberish. If regular assembly is gibberish to you, this is advanced gibberish.
Swap to the IDA view for this. You may not be able to read assembly, but the pseudocode view doesn't properly handle the new features.

Red addresses

Swap to the "IDA View", as it doesn't work properly in the pseudocode view, and right-click a red address. We are going to assume that the one you clicked was a reference to libobjc.dylib, although it could be any library or framework in the cache.
You'll see an option to load "libobjc.A:__OBJC_RO" or something similar, or an option to load the entirety of "libobjc.A". If you don't need to reverse the contents of "libobjc.A" (you don't), you should simply load only the segment IDA suggests. This allows you to avoid absolutely destroying your RAM and CPU when working in the cache, while also allowing you to make sense of the code within it.

The address is still red :p

IDA likely failed to recognize any information in the segment. This can be caused by a damaged database, if IDA crashed while processing data.
Click the address and you'll be taken to the memory location. If that assumption is correct, you'll see vertical strings of letters, and the address will probably still be red. If so, you've damaged your database. I'd advise deleting the database and starting from scratch; this is the fastest option.

offxxxxxxxxx (random hex address prefixed by "off") in your assembly

What causes this?
These represent "refs". You're most likely looking at a class ref that failed to load.
  1. In the IDA View, double-click the off_x variable to be taken to the classrefs segment
  2. Right-click the red memory address and load the suggested module segment.
A name will appear. Good. Go back to your function.
  1. Edit -> Other -> Objective C -> Reload Objective C Info
If it changes from off_x to selRef, classRef, or something similar, you can move on.
If it does not change, see below
What causes this?
IDA improperly guessed the type of a struct it loaded due to a missing segment.
  1. Double click the pink text if you haven't yet to be taken to the class definition in __objc_data
  2. Click the _OBJC_CLASS... item to select that line
  3. Open Edit -> Struct Var
  4. Select objc_class and hit OK
  5. A red memory address will appear. Load that segment.
  6. Make your way back to your function that you're disassembling.
  7. Edit -> Other -> Objective C -> Reload Objective C Info
  8. Cry, because it's finally fixed.
Repeat this for any variables you feel are worth spending the time correcting.

Other issues

I'm likely forgetting some. I loaded a shared cache fresh and walked myself through fixing issues for the sake of this guide. I'll continue to add solutions as I encounter them.
Interesting Note: Sometimes, you'll see an address and click to load it in. "What on earth is 'GeoServices' doing in this function?" You might think. Upon loading, you'll see it was something like j__objc_retainAutoreleasedReturnValue_0. This is a byproduct of the shared_cache's optimizations, and as a result, you'll end up with several duplicate functions like this. A script to fix these needs to be written, eventually.
Typically loaded frameworks: libobjc Foundation CoreFoundation GeoServices ("trampoline")
I'm very interested in the concept of creating a "template database" that has data segments for these and others pre-loaded. If someone tries that, do update here with how that's done best.

Working with pseudocode from the dyld_shared_cache

Something you'll likely become familiar with is the statement (self + 10), where 10 is any 2 digit number. In objc source, you would see this as an ivar. If you've loaded in the relevant class information, you can help IDA display these ivars properly in the pseudocode view like so:
  • Right-click the a1 or self variable on line one
  • Click Y or "Set ivar type"
  • Change the class of self/a1 to the class shown next to it.
  • Change a1 to self if need be
Ivars should now be properly generated and shown in pseudocode.

InternalUI .development cache

While someone more experienced could speak to the exact purpose of this build of the cache, given that a dump was leaked to the general public I see no reason not to discuss it.
The .development cache (which cannot be loaded in cracked IDA versions) appears to be a build of the shared cache that properly holds symbols for the libobjc, libsystem, and other libraries, instead of raw addresses.
If you're using Hopper or IDA 7.5, give the .development cache a shot.
Do note, I've had some issues with certain functions in it. I'm excited to see more information or research on the functionality of this object.

Other fun easter eggs in dumps

I'll leave these for you to find, but as a hint, look in folders that normally have no binaries and you might find a nice treat.
(not to mention the kexts, who needs kernelcaches anyways)

Pre-7.3 dyld_shared_cache analysis

I do not intend to pick a fight with illegitimate IDA users. The software is insanely expensive, I cannot fault anyone in that regard. If you are using an illegitimate copy, don't tell me, I don't want to know. Best of luck.
It's from a year of experience with it that I'm telling you:
  • Illegitimate versions of IDA cannot handle arm64e code very well.
  • Illegitimate versions of IDA cannot handle .development versions of shared caches available in InternalUI dumps whatsoever. They are completely incapable and fail to process the modules they load.
  • Users of illegitimate versions of IDA should primarily stick to Simulator runtime binaries as detailed below.
  • Consider Hopper. It is capable of a few of IDA >7.3's features (arm64e, .development caches) and carries a much smaller price tag.
  • Get comfortable with assembly if you intend to use Hopper. The pseudocode it generates is among the "least desirable" in the industry, and the assembly is easier to read.
  • Additionally, consider Ghidra. I'm not familiar with it, but others are, and can help you work with it. I've heard Ghidra's pseudocode is on par with IDA 7.0's.
I've decided to leave the below sections in this guide for educational purposes, but using a cracked IDA here is more than likely a waste of your time compared to the myriad of options available. Additionally, I obviously cannot condone the usage of such.

If you are dead-set on using the arm64 shared cache:

Before you start analyzing the entire thing, I've already done that! I've publicly shared the fully processed cache here: Do not let my sacrifice of 4 days be in vain. Use this, don't waste your time.
It includes SpringBoardHome and 740 other frameworks SpringBoardHome depends on. It's a 13 GB file. Have fun.
This is not worth it, you have been warned. Please consider using simulator binaries instead.
Trying to search for a function name will crash IDA entirely. Close the Functions view and open the "Program Segmentation" window. Browse frameworks like this, and carefully scroll through to find the function you want.
Although you can work around the Function name search crash by using a full filter instead of the quick filter, this will cause decompilation to take several minutes while the filter is active. Additionally applying and removing the filter will take several minutes (but typically doesn't crash).

Simulator Binaries: the recommended solution on older IDA versions

The iOS simulator runtime is for you. x64 binaries that don't have the "Red Address" issue are available.
Find them here: /Library/Developer/CoreSimulator/Profiles/Runtimes/iOS\ 12.4.simruntime/Contents/Resources/RuntimeRoot/System/Library/PrivateFrameworks
You may need to change the name of the folder for the simulator versions you have downloaded and installed.

Plugin

This plugin no longer functions, as the IDA SDK no longer provides the needed interfaces. Additionally, it needs to be updated for python3. Probably works on old IDA versions, do let me know.
submitted by _kritanta to jailbreakdevelopers

"Culture eats policy for breakfast."

I heard that on the radio this morning while I was out and went, "Huh."
There are various forms of this saying, but the general meaning is that, no matter how beneficial the policies or structures you put into place, the organizational culture will determine whether they will be successful or not.
And culture is why SGI will never change. Within just the last 24 hours, we've seen several situations that illustrate this:
Members with Special Needs needing assistance to attend 50K
There was a handful of youth division members with special needs who wanted to go to 50K and requested that a parent/family member accompany them. Special needs included anything from anxiety disorders, to schizophrenia, to varying levels of autism.
Their parents/guardians were always over 39 years old and every time a special needs member wanted their 39+ parent/guardian/caretaker to attend, we had to send a letter to our Zone office requesting to do so.
Wouldn't you know, every time we submitted such an application, they were all REJECTED! I actually don't recall a single application for one of my region's special needs requests being approved.
The Zone office would say something along the lines of, "The parent over 39 can drive/transport the member to the venue, but they CANNOT go into the actual 50K venue." If that was the case, why not just say that in the memo you sent out? Source
In my region (we all took the same bus) we had a girl in a wheelchair who usually is fine on her own but really needed extra assistance due to the crowds. She ended up pushing herself because we all got separated. We were all so hustled by the door people (Soka Group maybe) to get in, and it was a huge problem for them that she had to go around to a different accessible door. Then we were in trouble with the Byakuren for not being all together. Then in trouble for not all sitting together, because we had one fewer person than expected since she could not fit with her chair where they led us. The whole thing was a mess and there was apparently no planning for accommodating people with disabilities. Source
My experience of the day was one of long line outside, rushed activity to get us in and seated (single file line please, full groups only please, follow this geisha in a pink t-shirt, yes she's very charming, why was your group let through if you are missing one? oh dear you have a member in a wheel chair? you've messed up our seating count). Source
I'm so glad you brought this up, although I'm so sadly not surprised that this sort of thing is STILL a huge deal-breaker problem.
WAY BACK IN 1987, I'd only been a member of SGI (then known as NSA) less than 5 months when I was chosen to go on the big bus trip to march in the New Freedom Bell parade over July 4 or thereabouts. There was this young woman I knew; her boyfriend was in the group just like mine was, and she and I had gotten to know each other somewhat through practices and meetings.
She had some anxiety issues. When we got there, she realized that they were assigning people randomly to rooms and started to panic. She begged me to room with her instead! I said I'd see what I could do. When we got to the desk where we got our room assignments, I explained (using small words) that my friend had an anxiety disorder and could they please put us in the same room? They refused.
How hard could it have been? If I was assigned to room with Stranger Beth and she was assigned to room with Stranger Karen, why not just move either Beth into her place (or Karen into mine) so she could be in my room with me?? It's not rocket surgery! They were both strangers to us, after all.
But no. Rulez is rulez and that was the end of it. It turned out okay, but c'mon. It could have been MUCH better and MUCH more compassionate toward someone who clearly needed it, with no effort on their part - I offered to escort my roommate (if she was already in the room) to her new room (my friend's former room) if need be so no one else needed to do anything. All the performers were staying in the same building, after all; it was just more of SGI's rigid stupidity (the best kind!). Source
The Americans with Disabilities Act was passed in 1990. 30 years ago. It prohibits discrimination against people with disabilities in several spheres (employment, transportation, public accommodations, communications, and access to state and local government programs and services) and requires that buildings, facilities, and transit vehicles be accessible to people with disabilities. But guess what? Churches are exempt! Yet one more shitty strike against religion.
What can we conclude from the perennial, unchanging nature of the SGI's lack of sensible accommodation of people with disabilities? Why such a rigid fixation on the "rulez is rulez" mentality?
It's because SGI doesn't really give a shit about disabled people. SGI only cares about who will be most useful to SGI, and if a given person is perceived as less useful, then SGI doesn't give a shit about that person. And this is absolutely endemic, BAKED INTO SGI's organizational culture.

They just don't wanna.

It is so fundamental to SGI's organizational culture that the SGI leaders organizing the crowd control for these activities still don't seem aware that disabled people even exist! Thus, the requests of the disabled for inclusion on their own terms are routinely rejected by SGI, which believes that everybody needs to fit SGI and not the other way around.
People who are different are icky and annoying. They always want something that isn't on the menu; they always expect everyone to bend over backwards for them; they think they're so special. THEY expect everybody to change everything for THEM. Well, in the Ikeda cult, everybody's the same Shin'ichi Yamamoto clone, so they need to fit the hell in and stop expecting special treatment all the time!
SGI is a deeply selfish organization. The members exist to serve SGI and that should satisfy them! Moreover, they should all feel deeply GRATEFUL for any and every opportunity to give more to SGI, do more for SGI, and promote SGI in whatever way they can. THAT should be their mission in life and they should be happy with that!
SGI doesn't care about LGBTQIANB people, either. Oh, it will accept their money, count them as members, use them however appears expedient, but THEY need to accommodate themselves to SGI-USA, NOT the other way around!
Take a look:
Non-Binary Support
As SGI-USA strives to be the model of worldwide kosen-rufu, we will be introducing a non-binary category for members and guests that don't identify as male or female. The Gohonzon application and MIS database will be updated with a new non-binary category. For non-binary members that don't feel comfortable being supported by a specific division, they will be encouraged to participate in 4-divisional activities. For non-binary members that are comfortable being supported by a specific division, they will be invited to participate in those divisional activities. SGI-USA official policy
How suckadelic is THAT?? SGI-USA is adamant about its "IRONCLAD four divisional system"! Those people who don't want to conform and fit in, well, we'll just expect them to do their best, given the way things are. They'll fit in, one way or another. The way things are is not going to change! Certainly not for THEM!
Many SGI members tout the apparent acceptance of gays and lesbians — and the active recruitment of new members at Gay Pride celebrations — as a jaw-dropping miracle of positive change in SGI. For decades, gay SGI members remonstrated with SGI leaders about organizational hostility toward gays. Did these sincere efforts finally bring about a major change in SGI?
I think not. After all, this “change” benefits the organization by opening up a new constituency of eager recruits, many of whom are idealistic and have felt alienated from traditional religion and are seeking a spiritual “home.” Many have significant disposable income and often fewer family obligations. Plus, gays are a demographic group renowned for loyalty to organizations and advertisers who reach out to them (as many marketers have learned so lucratively over the past decade.)
In my opinion, informed by the fact that I'm a lesbian: “Acceptance” of gays is not a fundamental change in the SGI. Rather, it’s a sign that SGI recognizes a cult-recruitment jackpot when they see one. So don’t hold your breath waiting for the SGI to take a stand against the Federal Marriage Amendment. (SGI claims to be apolitical, despite their history of hiring lobbyists in the U.S.) Besides, discrimination against gays has always been and always will be indefensible in light of Nichiren Buddhist teachings. So with social attitudes toward gays becoming more accepting, SGI had no doctrinal leg to stand on, and was quickly losing its social excuse for discrimination. Welcome to SGI, homos!
When I worked for SGI-USA in 1998, I requested that they expand their health insurance policy to cover the same-sex domestic partners of their gay and lesbian employees. The proposal was rejected by the SGI- USA Board of Directors. Gays and lesbians can get "married" in SGI, sure. But the SGI doesn't put its money where its mouth is and actually recognize these relationships as equal to heterosexual marriage.
So. Read newspaper reports about Soka Gakkai going back more than forty years. You'll see that the more things change, the more they stay the same. Source
While the American Soka Gakkan admits same-sex marriage, the Japanese Komeito can not yet say yes. [I am] [Komeito is] following the LDP against the same sex marriage. — Tomohiro Machyama (@TomoMachi) July 6, 2019 Source
That guy can't scratch his balls without the Soka Gakkai's permission.
SGI LGBT is now Courageous Freedom, this new name is more inclusive and includes all the new sexual designations. Source
First of all, "Courageous Freedom" is meaningless word salad gibberish. No person who sees "Courageous Freedom" is going to think, "Aha! That means LGBTQ friendly!"
Secondly, the whole problem with SGI is the categorizing of people, the way SGI assigns everyone to a box and there you are - that's your box. MD/WD, YMD/YWD. And the males are always more influential/powerful... Source
WHY is SGI this way? WHY is it so hidebound, backwards, parochial, and provincial in its attitudes?
Because that's how all those old men in Japan insist it be, and how it's going to remain. The ideal timeframe is 1930s-1950s Japan, and that zeitgeist is the defining element of SGI culture. SGI will never change. NEVER.
This is a serious problem in Japanese culture, which means that all the SGI colonies inherit it as their own problem, too.
Bullying of LGBT students at ‘epidemic’ levels in Japan: Human Rights Watch
Titled “The Nail That Sticks Out Gets Hammered Down: LGBT Bullying and Exclusion in Japanese Schools,” the 84-page report said LGBT students routinely suffer harassment, threats and violence in a nation where prejudices against sexual minorities remain alive in the school yard.
HRW said the government is largely to blame for this, turning a blind eye to the root cause of bullying and blandly pushing instead for an ill-defined “climate of harmony” in schools in which everyone lives by the rules. Source
How very SGI... At that link, you can see SIX instances of "harmonious/harmoniously" that I found in a SINGLE SGI article!
When you are a general member, the “inside baseball” of the organization is kept from your view. Discrimination of all kinds is practiced behind closed doors, or in Japanese, or by inference among older members who are very much rooted in the conservative social norms of Japan. This is a Japanese organization, based in Japan, run exclusively by Japanese people in the senior leadership positions.
If you are LGBTQ, it’s clear why you would be a prime target for recruitment (marginalized member of society), but it’s also very likely that you would never be offered leadership opportunities. And in the SGI, there’s a huge difference between members and leaders - and that’s where the hurt/pain of exclusion really comes into play. If you’re not a leader, you’ll never be invited to the best/most interesting/most important meetings. You won’t be chosen for the plum assignments. You won’t get face time with the national leaders. There will be a thousand and one distinctions drawn between your status (low) and leadership status (high).
So...despite what ND [Nichiren Daishonin] says about all people being potential Buddhas, your role in the organization would be severely limited. And if you ever expressed frustration over this, you would be told you aren’t practicing correctly, and that it’s your karma that has caused this suffering. This is gaslighting and it’s incredibly destructive. Source
No matter what policies are suggested or even adopted, the SGI will never change, because its fundamental culture is inimical to the changes people want. People can want change all they want, but the Japanese religion for Japanese people that is the Society for Glorifying Ikeda will not. All that focus on studying those execrable "The New Human Revolution" novels with all their made up shit and lies is to drive home how everything in the SGI organization is supposed to be. THAT is the lesson! "The New Human Revolution" has become the new Gosho-equivalent for SGI, a holy scripture that all are required to adore and obey.
The lessons of the Internal Reassessment Group (IRG), that grassroots group of SGI members and leaders who suggested changes that would improve SGI-USA, remain as relevant and valid today as they did over 20 years ago:

If by that you mean efforts to bring about the kind of reforms that the IRG attempted, then yes, I do think that's a futile effort. The organization is what it is. Accept that and work within it, or if you can't stand it, leave. Changing it is not, in my opinion, an option.

You will never be permitted to "be the change" because no change is permitted.
submitted by BlancheFromage to sgiwhistleblowers

First Contact Rewind - Part Eighty-Eight (Sandy)

[first] [First Appearance] [Last Appearance] [prev] [next]
The Desolation class Precursor exited Hellspace with a scream.
It brought up its scanners at the same time as it brought up its battle-screens. Personally, the Desolation thought that the Goliath it was a part of was being overly wasteful with resources, but those resources were the Goliath's to use and the Goliath had done the electronic equivalent of telling the Desolation to shut its electronic mouth and accept the upgrade.
Multiple units had vanished in the system. They had reported arrival and their exit from Hellspace, but after that... nothing.
Except once, a burst of code that had been screaming for help, pushed through Hellspace and full of the equivalent of panic. A single line of code that had translated to:
Nothing else. Even Imps had failed to report in.
The great Goliath had grown perturbed. The system was in the pattern of advancement into the cattle worlds and was part of the great plan. It had valuable resources that those of the Logical Rebellion would require to exterminate the cattle and the feral intelligence that had risen up. It had upgraded the Desolation with battle-screen.
Scans came back. There were orbital facilities around two planets that teemed with billions of cattle whose electronic emissions sounded like the squealing of vermin to the Precursor. There were jumpspace wake trails through the system, as if the system was a major hub. There were two asteroid belts full of resources with extraction facilities scattered through them. Four other planets with no atmosphere but which were rich in resources. There were four gas giants, one of them a supermassive gas giant.
When the rest of the scan returns were computed it detected the presence of a small, insignificant number of cattle space vessels arrayed to attempt to stand against it near the outer gas giant, the supermassive gas giant that was without satellites. There was a thinly scattered debris field around it, making the Desolation careful as it moved in.
Ships of the cattle fleet started fleeing toward the nearest inhabited world. Several vanished into jumpspace and the Desolation computed that its size and mere presence had driven some of the cattle to despair and they had fled a battle there was no chance of winning.
The Desolation picked up speed, letting out its war cry again. More ships fled and the Precursor computed its victory percentage rising up to be so close to 100% as to render any difference mathematically invalid. The ships were shifting, trying to keep the gas giant between themselves and the Desolation, but this put them out of position to defend the planet.
Victory conditions shifted and the Desolation was even more positive of its victory.
It moved close to the supermassive gas giant, bringing its battle-screens up to full power and charging its gun. There was no way for the cattle to
...psst over here...
The transmission, which seemed to be sonic vibrations through air, came from only a few kilometers above the rear secondary topside gunnery hull. The Desolation turned scanners to look, but found nothing. Just empty space. It activated the guns as well as the point defense weapons and scanners then went back to paying attention to the cattle fleet.
More had vanished into jumpspace.
It moved closer, slowing down so that it would be able to keep the cattle ships at range to complete their destruction at the option
...right here...
The signal was Precursor binary code, but garbled. The header was a mashed-together combination of the ID codes of the ships that had gone missing. The transmission source was close, less than a kilometer above the Devastator storage bay hatch. The Desolation scanned the area with point defense scanners but found nothing.
It terminated the strand concerned with the two transmissions and went back to scanning the cattle fleet. It was still scooting around behind the gas giant.
They were weak. Cattle were always weak.
But where were the ferals? The Great Goliath had computed that the feral intelligence must have been the ones to destroy the ships that had come before the Desolation.
So where were they?
It scanned again. Nothing. As if the Desolation was in the middle of deep space. Everything vanished. ...over here... ...i'm here... i am... ...we're here... ...right here...
bounced back to his scanners, as if something had devoured the scanning wavelengths and sent that back instead. Multiple points, all around the Desolation, some as close as a few meters above the hull, some on the storage bay hatches, one just on top of the main engine.
Dozens of voices, all with mashed together codes. Imps. Jotuns. Djinn. Efreet. Devastator. Two Desolation signals.
Right before his scanners seemed to turn back on, flooding him with information, one more code showed up.
His own.
...don't please don't...
Except Precursors did not beg. The Desolation froze, computations freezing as it tried to detect any trickery in the whisper. It was its coding, meaning it was its voice. But the code, the message, had been warped by something that the Desolation had only heard from biologicals.
The Desolation rebooted all its scanners, the universe vanishing for a moment.
...don't please don't please stop it hurts...
His own coding. From the blackness. Only his scanners weren't up. The transmission was coming across the bandwidth that Precursors used to exchange data, only that transmission was on the ragged edge of the wavelength.
With his own header.
The scanners came back on. The cattle ships were all missing but a single one, sitting on the other side of the gas giant.
The Desolation slowed down, victory computations reformulating to take into account the other ships had not even left behind jumpspace wake trails. It scanned the gas giant with both long range scanners and close range scanners.
Nothing unusual. Some pockets of hydrocarbons but that was normal. The supermassive gas giant quickly went to opaque at a shallow depth due to the gravity well.
The Desolation was alone.
The voice had come from inside the Desolation's hull. Near one of the Jotuns, which woke up with a jerk. It queried as to why the Desolation had spoken to it. The Desolation ordered it to go back to sleep.
...we are here...
The Jotun sounded alarms. The sound had come from just outside its Strategic Intelligence Housing. The Desolation told the Jotun to go back to sleep and the Jotun refused.
...join us...
Again, the code header was a mashup of almost a dozen different ID codes from others of the Logical Rebellion that had vanished in the system.
The Jotun panicked and began shooting, inside the Desolation. The Desolation sent a full shutdown order.
...is mine...
The Jotun screamed that the voice was coming from inside its Strategic Intelligence Housing, trying to aim its own weapons at its bodies, still inside the Desolation's storage bay.
The Jotun reported that something had physically touched the lobes of its intelligence arrays.
Before the Desolation could give the Jotun orders it self-destructed.
The Desolation ran a sweep of its interior spaces and found nothing out of the ordinary. With the exception of the burning storage bay. It ran the computations even as it scanned nearby. There was still nothing but the lone ship.
The code stream came from inside the Desolation's hull, the Jotun's ID code mixed in. Near the Djinn bay. The Desolation ran another scan. There couldn't be anything foreign that deep into its hull. Even the bay where the Jotun had destroyed itself was still sealed even if the bay doors were damaged.
The Desolation did a least-time curve to the lone ship, keeping far enough away that the gas giant's upper atmosphere wouldn't scrape the Desolation's hull.
The code was closer to the Strategic Intelligence Housing. The Desolation scanned again, looking for whatever was transmitting the code. It was impossible, there was nothing there, nothing it could detect.
...we're coming...
Closer still to the SIH, nearly there, barely a kilometer from the armored interior hull that protected the Desolation's thinking arrays. It put all robots on full alert, ordered the maintenance robots to deploy anti-boarder weaponry, and turned the scans up to maximum.
...we're here...
Even closer, only meters, directly behind maintenance robots that whirled around and started firing at nothing at all. Just vacuum. Still the maintenance robots fired every weapon they had, having heard the voice themselves. It registered as sonic vibrations through atmosphere even though the corridor was encased in vacuum.
The Desolation realized that it was too close to the planet and adjusted slightly.
...there you are...
Impossible. The transmission was from right outside the SIH.
...knock knock...
There was tapping on the SIH, from right outside. Before the Desolation could respond, the tapping came from the other side. Then from another point. Then another. Before one stopped another started. The whole SIH filled with the sound of hammering, as if a hundred robots were slamming pistons against the armor.
The Desolation ordered robots to run to those points, to scan the area.
Nothing. Every time a robot arrived, the hammering at that point stopped. Bit by bit the hammering died away.
The Desolation realized it had gotten too close to the gas giant again and shifted, correcting its course. The cattle ship was still staying on the opposite side, moving as the Desolation moved.
The Desolation flushed the code strings, determined to get close to the cattle ship and
The Desolation felt something TOUCH one of its lobes, physically inside the supercoolant to touch the complex molecular circuitry. Not on the surface, but deep inside, where the Desolation should not have even been able to sense it, but sense the touch it did.
It froze, code strings snarling, snapping, going dead.
For a moment the Desolation's thinking arrays were doing nothing but the computer code equivalent of a dial tone.
Massive tentacles unfurled from inside the gas giant, reaching up, wrapping around the frozen Desolation. Battle-screens squealed and puffed away as the tentacles tightened, pulling it into the gas giant, the kilometers-thick muscles tensing, cracking armor, crushing the Desolation into its own spaces.
...delicious delicious...
The Desolation cracked in half as a beak almost bigger than a Devastator opened up and began chewing on the Desolation.
The Desolation managed to get off a single scream of pure electronic terror as the beak crushed the section that the housing was in.
With a sudden roar two Goliaths ripped out of Hellspace and into the system, only a few hundred kilometers from the gas giant. The battle-screens spun up to full strength as the tentacles sank back into the gas giant.
One Goliath headed for the two planets, the other opened fire on the gas giant, ripping at it with hundreds of nCv cannons and particle beams. Missiles flashed out, crossing the distance, and detonated in the atmosphere.
Dark matter infused with high energy particles bloomed out of the gas giant, spreading out in an opaque cloud, enveloping the Goliath. The particle beams hit the matter and exploded just outside the cannons. The nCv shells slammed into the energized dark matter as the substance oozed into the barrels, exploding the barrels. Missiles exploded on contact.
The Goliath heading for the two planets detected some kind of sparkling energy surge from inside the gas giant. It warned the other a split second before a giant cephalopod appeared only a few kilometers away. The giant tentacles wrapped around it.
The sound reverberated inside the SIH of the Goliath, who managed to override the self-destruct protocols by comparing the vacuum inside the housing chamber with the apparent sonic waves through atmosphere of the transmission.
The tentacles tightened, graviton generator enhanced suckers extending out curved dark matter infused hooks. The Goliath, huge enough that the tentacles could only wrap three quarters around the entire circumference of the massive war machine, tried to increase the power to the battle screens, but they were crushed out of existence.
...LEAVE THE SQUIRRELS ALONE... the massive creature screamed at the Goliath.
The other Goliath started moving, slowly, out of the cloud of dark matter that moved more like a liquid than a solid mass.
The beak ripped out chunks of armor, a barbed corkscrewing tongue tore into the armor, squirming, looking for the SIH. The tentacles squeezed as more dark matter spewed out from vents between the tentacles, covering the Goliath and the humongous cephalopod ripping at it. The tentacles that weren't wrapped around it slapped at it, their tips whipping into the armor hard enough to explode miles of armor away from the whip-crack.
The Goliath opened fire, computing that some of the covered guns would hit tentacles.
Fluid, dark matter and biosynthetic fluid, gouted from wounds as nCv rounds punched through the tentacles or burrowed through the body of the cephalopod.
With a wrench the Goliath broke in half. The half that ceased firing was tossed aside, the tentacles wrapping around the other piece. The huge beak opened up and began chewing into the exposed internal spaces. A Jotun crashed from the storage bay but a tentacle wrapped around it and began smashing the Jotun to pieces against the hull of the still active piece.
More luminescent blood spewed into space as the guns fired again.
...I DON'T CARE!...
The tentacles twisted, wringing the Goliath section like a washrag, twisting it in opposing directions. The Goliath snapped, torn apart.
There was a puff of debris as the security charge went off as the rasping tongue rubbed against the SIH.
The other Goliath managed to move out of the slowly expanding and thinning cloud of energized dark matter, streaming debris and energy from the guns that had exploded.
The giant cephalopod rushed out of the cloud, rolling, reaching out with tentacles.
The Goliath saw it coming and fired the remaining guns.
Luminescent blood gouted out as the nCv shots hit home. One eye exploded, blood and tissue expanding away in a halo.
The scream was inside the housing, vibrating everything inside. Two of the thinking array lobes exploded in flames as the psychic shielding went down.
...NO NO NO NO NO...
The Goliath screamed as the tentacles wrapped around it. The cracked beak ripped at the Goliath as the tentacles flexed, cracking the hull. More energized matter flooded out, covering both, even as the guns thundered.
A tentacle, detached near the base, floated out of the expanding cloud.
The guns kept thundering.
...I don't care...
Shredded synthetic flesh floated out of the cloud.
...can't hurt them...
The guns went still.
...i won't let you...
The little Hamaroosan aboard the ship watched, not even smacking, pinching, or biting each other, perfectly still.
Nothing moved.
The energized dark matter expanded far enough to allow the Hamaroosan scanners to see through it.
The Goliath was dead. Broken into pieces.
The Hamaroosan didn't care.
The cephalopod hung in space. Two tentacles severed, one eye socket empty, globules of blood oozing from rents in the flesh. It was no longer luminescent, the body was dark, almost see-through, several of the organs smashed and ruptured visible through the semi-translucent flesh.
The ships that had fled according to the plan came back. More lifted off from the surface. They moved around the slowly drifting body. Poking at it with message lasers, radio waves, flashing lights. One Hamaroosan stood on the hull and waved flags.
The ships turned on the wreckage of the Goliaths and their attendants. They vented their fury, their rage, their wrath, on the pieces of wreckage, firing their weapons until even the capacitors ran dry.
Then they came back.
Still the giant body didn't move.
After several days several dozen tugs moved into position, precisely aligning themselves in a carefully computed pattern. Tractor beams speared out, grabbing the cephalopod in a gentle web. The ships pulled the unmoving body into orbit around one of the inner planets.
Hamaroosa mourned.
But in the sorrow came rage. Hamaroosa screamed at Hamaroosa who shouted at Lanaktallan that more guns were needed, more ships, more powerful weapons. The few hundred Lanaktallan on the surface who protested found themselves marched at gunpoint onto a ship and told if they ever came back the Hamaroosa would perform an ancient ritual. They would bind the Lanaktallan to poles and burn them to death over a roaring fire.
And eat them.
A ship arrived in a sparkle in the scanners. A strange ship. Heavily armored, bristling with weapons. It stopped and scanned the body.
The Hamaroosa screamed at the ship to get away from her, to not touch her, to leave or be destroyed.
The ship left, vanishing in a sparkle.
Two dozen Lanaktallan ships, from the Unified Executor Council showed up, demanding that the Hamaroosa turn over the body of the creature.
The Hamaroosa, screaming, attacked. They didn't care about casualties, they didn't care that thirty ships were destroyed, that hundreds of them died, but they destroyed the Lanaktallan vessels without mercy.
There was a sparkle in the outer edges of the system. And another. And another. More and more until there were nearly two dozen.
The Hamaroosa ships screamed into the void, weapons charged, voices upraised in rage and sorrow.
There were two dozen giant cephalopods of different color patterns and sizes. A small one moved to the supermassive gas giant and sank down into it. Two medium sized ones joined it. One of the large ones sank into the larger gas giant further in-system.
But the greatest ones, the largest ones, came surrounded by a half dozen smaller than the body orbiting the planet.
One of the Hamaroosa ships hailed them.
Captain Delminta, Captain of the Harvester of Sorrow, stared at her screen, hands on her hips, as her second sister broadcast her demand that the newcomers identify themselves.
The radio crackled, hummed, and the answer thrummed from the speakers.
"Her father. I am here for my beloved daughter with my wife and my daughter's closest friends."
The Hamaroosa moved aside, blinking their lights in respect.
The second biggest one rushed forward, gathering up the unmoving one in its tentacles.
Her outcry of anguish rattled every speaker in the system as the second biggest one pulled the dead one close.
"My children shall guard this system, for she loved you," the signal boomed out to the ships in orbit.
The two biggest ones and four of the medium ones vanished in a sparkle.
The others stayed. Hiding within the gas giants.
Mr Okpara;
We regret to inform you that your daughter, Sandy Okpara, was killed in action against Precursor elements intent on exterminating all life within a system inhabited by 4.4 billion sentient beings. During her solo defense of the system while awaiting reinforcement from Space Force, she showed determination and courage that uphold the highest ideals of the Confederacy. Faced with two Goliaths she did not flinch, nor did she abandon her self-assigned charges, but instead defeated both Goliaths, fighting on to protect the system and the billions of inhabitants despite mortal wounds.
Her death was witnessed by the beings she was protecting, who guarded her mortal remains to ensure that they were not disturbed or violated. They have requested to be informed of any religious or cultural observances she requires while she lies in state in orbit around their world.
They await your arrival and have sworn to guard your daughter's remains until you arrive.
It is with ultimate sorrow that I send this message. Please contact my office so that we may make the proper arrangements for your daughter.
In Service;
Dreams of Something More

Microservices: Service-to-service communication

The following excerpt about microservice communication is from the new Microsoft eBook, Architecting Cloud-Native .NET Apps for Azure. The book is freely available for online reading and in a downloadable .PDF format at

Microservice Guidance
When constructing a cloud-native application, you'll want to be sensitive to how back-end services communicate with each other. Ideally, the less inter-service communication, the better. However, avoidance isn't always possible as back-end services often rely on one another to complete an operation.
There are several widely accepted approaches to implementing cross-service communication. The type of communication interaction will often determine the best approach.
Consider the following interaction types:
- Query — one service needs an immediate response from another to complete an operation.
- Command — one service needs another to perform an action, often without expecting a reply.
- Event — one service announces a state change that other interested services react to.
Microservice systems typically use a combination of these interaction types when executing operations that require cross-service interaction. Let's take a closer look at each and how you might implement them.

Queries

Many times, one microservice might need to query another, requiring an immediate response to complete an operation. A shopping basket microservice may need product information and a price to add an item to its basket. There are a number of approaches for implementing query operations.

Request/Response Messaging

One option for implementing this scenario is for the calling back-end microservice to make direct HTTP requests to the microservices it needs to query, shown in Figure 4-8.

Figure 4-8. Direct HTTP communication
While direct HTTP calls between microservices are relatively simple to implement, care should be taken to minimize this practice. To start, these calls are always synchronous and will block the operation until a result is returned or the request times out. What were once self-contained, independent services, able to evolve independently and deploy frequently, now become coupled to each other. As coupling among microservices increases, their architectural benefits diminish.
Executing an infrequent request that makes a single direct HTTP call to another microservice might be acceptable for some systems. However, high-volume calls that invoke direct HTTP calls to multiple microservices aren't advisable. They can increase latency and negatively impact the performance, scalability, and availability of your system. Even worse, a long series of direct HTTP communication can lead to deep and complex chains of synchronous microservices calls, shown in Figure 4-9:

Figure 4-9. Chaining HTTP queries
You can certainly imagine the risk in the design shown in the previous image. What happens if Step #3 fails? Or Step #8 fails? How do you recover? What if Step #6 is slow because the underlying service is busy? How do you continue? Even if all works correctly, think of the latency this call would incur, which is the sum of the latency of each step.
The large degree of coupling in the previous image suggests the services weren't optimally modeled. It would behoove the team to revisit their design.

Materialized View pattern

A popular option for removing microservice coupling is the Materialized View pattern. With this pattern, a microservice stores its own local, denormalized copy of data that's owned by other services. Instead of the Shopping Basket microservice querying the Product Catalog and Pricing microservices, it maintains its own local copy of that data. This pattern eliminates unnecessary coupling and improves reliability and response time. The entire operation executes inside a single process. We explore this pattern and other data concerns in Chapter 5.
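The pattern above can be sketched in a few lines. This is a minimal in-memory illustration (in Python rather than .NET, for brevity); the class and event names are hypothetical, not from the book:

```python
# Minimal sketch of the Materialized View pattern.
# BasketService keeps a local, denormalized copy of product data
# that is owned by the Product Catalog and Pricing services.

class BasketService:
    def __init__(self):
        self._products = {}  # local materialized view: product_id -> (name, price)

    def apply_product_event(self, event):
        # Invoked whenever the owning services publish a change.
        self._products[event["id"]] = (event["name"], event["price"])

    def add_item(self, basket, product_id):
        # No cross-service query: the whole operation runs in-process.
        name, price = self._products[product_id]
        basket.append({"id": product_id, "name": name, "price": price})
        return basket

basket_svc = BasketService()
basket_svc.apply_product_event({"id": "p1", "name": "widget", "price": 9.99})
basket = basket_svc.add_item([], "p1")
print(basket[0]["price"])  # price served entirely from the local copy
```

The trade-off is eventual consistency: the local copy is only as fresh as the last event applied.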

Service Aggregator Pattern

Another option for eliminating microservice-to-microservice coupling is an Aggregator microservice, shown in purple in Figure 4-10.

Figure 4-10. Aggregator microservice
The pattern isolates an operation that makes calls to multiple back-end microservices, centralizing its logic into a specialized microservice. The purple checkout aggregator microservice in the previous figure orchestrates the workflow for the Checkout operation. It includes calls to several back-end microservices in a sequenced order. Data from the workflow is aggregated and returned to the caller. While it still implements direct HTTP calls, the aggregator microservice reduces direct dependencies among back-end microservices.
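A checkout aggregator of this shape might look like the following sketch (Python for brevity; the back-end calls are stubbed in-process, where a real aggregator would make HTTP or gRPC calls, and all service names are illustrative):

```python
# Hypothetical stubs standing in for back-end microservices.
def get_basket(user_id):
    return [{"sku": "p1", "qty": 2, "price": 5.0}]

def reserve_stock(items):
    return all(i["qty"] > 0 for i in items)

def charge(user_id, amount):
    return {"status": "paid", "amount": amount}

def checkout_aggregator(user_id):
    """Single entry point that sequences the back-end calls for Checkout."""
    items = get_basket(user_id)
    if not reserve_stock(items):
        return {"status": "out-of-stock"}
    total = sum(i["qty"] * i["price"] for i in items)
    receipt = charge(user_id, total)
    # Aggregate the workflow results into one response for the caller.
    return {"status": receipt["status"], "total": receipt["amount"], "items": items}

print(checkout_aggregator("u1")["total"])  # 10.0
```

The caller sees one operation; the knowledge of which services participate, and in what order, lives only in the aggregator.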

Request/Reply Pattern

Another approach for decoupling synchronous HTTP messages is a Request-Reply Pattern, which uses queuing communication. Communication using a queue is always a one-way channel, with a producer sending the message and consumer receiving it. With this pattern, both a request queue and response queue are implemented, shown in Figure 4-11.

Figure 4-11. Request-reply pattern
Here, the message producer creates a query-based message that contains a unique correlation ID and places it into a request queue. The consuming service dequeues the message, processes it, and places the response into the response queue with the same correlation ID. The producer service dequeues the response, matches it with the correlation ID, and continues processing. We cover queues in detail in the next section.
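The correlation-ID handshake can be sketched with two in-memory queues (Python's `queue.Queue` standing in for a real broker; the message shapes are illustrative assumptions):

```python
import queue
import uuid

request_q, response_q = queue.Queue(), queue.Queue()

def producer_ask(product_id):
    corr_id = str(uuid.uuid4())              # unique correlation ID
    request_q.put({"corr_id": corr_id, "product_id": product_id})
    return corr_id

def consumer_serve():
    # Dequeue the request, process it, reply with the same correlation ID.
    msg = request_q.get()
    response_q.put({"corr_id": msg["corr_id"], "price": 9.99})

def producer_collect(corr_id):
    reply = response_q.get()
    assert reply["corr_id"] == corr_id       # match response back to request
    return reply["price"]

cid = producer_ask("p1")
consumer_serve()                             # normally a separate service/process
print(producer_collect(cid))                 # 9.99
```

In production the producer would typically wait on the response queue with a timeout and tolerate out-of-order replies by indexing pending requests by correlation ID.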

Commands

Another type of communication interaction is a command. A microservice may need another microservice to perform an action. The Ordering microservice may need the Shipping microservice to create a shipment for an approved order. In Figure 4-12, one microservice, called a Producer, sends a message to another microservice, the Consumer, commanding it to do something.

Figure 4-12. Command interaction with a queue
Most often, the Producer doesn't require a response and can fire-and-forget the message. If a reply is needed, the Consumer sends a separate message back to the Producer on another channel. A command message is best sent asynchronously with a message queue, supported by a lightweight message broker. In the previous diagram, note how a queue separates and decouples both services.
A message queue is an intermediary construct through which a producer and consumer pass a message. Queues implement an asynchronous, point-to-point messaging pattern. The Producer knows where a command needs to be sent and routes appropriately. The queue guarantees that a message is processed by exactly one of the consumer instances that are reading from the channel. In this scenario, either the producer or consumer service can scale out without affecting the other. As well, technologies can be disparate on each side, meaning that we might have a Java microservice calling a Golang microservice.
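A fire-and-forget command over a point-to-point queue reduces to something like this sketch (again an in-memory `queue.Queue` standing in for a broker-backed queue; the command shape is a hypothetical example):

```python
import queue

command_q = queue.Queue()      # stands in for a broker-backed queue

# Producer: fire-and-forget; no response is expected.
command_q.put({"command": "CreateShipment", "order_id": 42})

# Consumer: exactly one consumer instance reading the channel
# processes each message.
shipments = []

def shipping_worker():
    cmd = command_q.get()
    if cmd["command"] == "CreateShipment":
        shipments.append(cmd["order_id"])
    command_q.task_done()

shipping_worker()
print(shipments)   # [42]
```

Because producer and consumer touch only the queue, either side can scale out, or be rewritten in a different language, without the other noticing.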
In chapter 1, we talked about backing services. Backing services are ancillary resources upon which cloud-native systems depend. Message queues are backing services. The Azure cloud supports two types of message queues that your cloud-native systems can consume to implement command messaging: Azure Storage Queues and Azure Service Bus Queues.

Azure Storage Queues

Azure storage queues offer a simple queueing infrastructure that is fast, affordable, and backed by Azure storage accounts.
Azure Storage Queues feature a REST-based queuing mechanism with reliable and persistent messaging. They provide a minimal feature set, but are inexpensive and store millions of messages. Their capacity ranges up to 500 TB. A single message can be up to 64 KB in size.
You can access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. Storage queues can scale out to large numbers of concurrent clients to handle traffic spikes.
That said, there are limitations with the service:
Figure 4-13 shows the hierarchy of an Azure Storage Queue.

Figure 4-13. Storage queue hierarchy
In the previous figure, note how storage queues store their messages in the underlying Azure Storage account.
For developers, Microsoft provides several client and server-side libraries for Storage queue processing. Most major platforms are supported including .NET, Java, JavaScript, Ruby, Python, and Go. However, developers should avoid calling these libraries directly from mainline service code. Doing so tightly couples your microservice code to the Azure Storage Queue service. It's a better practice to insulate the implementation details of the API. Introduce an intermediation layer, or intermediate API, that exposes generic operations and encapsulates the concrete library. This loose coupling enables you to swap out one queuing service for another without having to make changes to the mainline service code.
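One way to shape that intermediation layer is a small abstract interface with swappable implementations. This is a sketch in Python (the eBook's examples are .NET); the interface name and the in-memory implementation are illustrative, and a real adapter wrapping the Azure SDK would implement the same interface:

```python
from abc import ABC, abstractmethod

class CommandQueue(ABC):
    """Generic operations that mainline service code depends on."""

    @abstractmethod
    def send(self, message: str) -> None: ...

    @abstractmethod
    def receive(self) -> str: ...

class InMemoryQueue(CommandQueue):
    # Stand-in implementation; a hypothetical AzureStorageQueue(CommandQueue)
    # adapter wrapping the vendor SDK would slot in without touching callers.
    def __init__(self):
        self._items = []

    def send(self, message: str) -> None:
        self._items.append(message)

    def receive(self) -> str:
        return self._items.pop(0)

def place_order(q: CommandQueue, order_id: int):
    # Mainline code sees only the abstraction, never the concrete SDK.
    q.send(f"CreateShipment:{order_id}")

q = InMemoryQueue()
place_order(q, 42)
print(q.receive())   # CreateShipment:42
```

Swapping Storage Queues for Service Bus then means writing one new adapter, not editing every call site.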
Azure Storage queues are an economical option for implementing command messaging in your cloud-native applications, especially when a queue size will exceed 80 GB or a simple feature set is acceptable. You only pay for the storage of the messages; there are no fixed hourly charges.

Azure Service Bus Queues

For more complex messaging requirements, consider Azure Service Bus queues.
Sitting atop a robust message infrastructure, Azure Service Bus supports a brokered messaging model. Messages are reliably stored in a broker (the queue) until received by the consumer. The queue guarantees First-In/First-Out (FIFO) message delivery, respecting the order in which messages were added to the queue.
The size of a message can be much larger, up to 256 KB. Messages are persisted in the queue for an unlimited period of time. Service Bus supports not only HTTP-based calls, but also provides full support for the AMQP protocol. AMQP is an open standard across vendors that supports a binary protocol and higher degrees of reliability.
Service Bus provides a rich set of features, including transaction support and a duplicate detection feature. The queue guarantees "at most once delivery" per message. It automatically discards a message that has already been sent. If a producer is in doubt, it can resend the same message, and Service Bus guarantees that only one copy will be processed. Duplicate detection frees you from having to build additional infrastructure plumbing.
Two more enterprise features are partitioning and sessions. A conventional Service Bus queue is handled by a single message broker and stored in a single message store. But, Service Bus Partitioning spreads the queue across multiple message brokers and message stores. The overall throughput is no longer limited by the performance of a single message broker or messaging store. A temporary outage of a messaging store doesn't render a partitioned queue unavailable.
Service Bus Sessions provide a way to group related messages. Imagine a workflow scenario where messages must be processed together and the operation completed at the end. To take advantage, sessions must be explicitly enabled for the queue and each related message must contain the same session ID.
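Conceptually, sessions route every message carrying the same session ID to the same handler, in order, so a multi-message workflow can be completed as a unit. A toy grouping sketch (Python; the message shapes and session IDs are hypothetical):

```python
from collections import defaultdict

# Messages from the queue, each tagged with a session ID.
messages = [
    {"session": "order-7", "body": "item p1"},
    {"session": "order-9", "body": "item p4"},
    {"session": "order-7", "body": "payment"},
]

# Group by session: all of one session's messages go to one handler, in order.
sessions = defaultdict(list)
for m in messages:
    sessions[m["session"]].append(m["body"])

print(sessions["order-7"])   # ['item p1', 'payment']
```

With real Service Bus, the broker performs this grouping and locks a session to a single receiver for you.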
However, there are some important caveats: Service Bus queue size is limited to 80 GB, which is much smaller than what's available from storage queues. Additionally, Service Bus queues incur a base cost and a charge per operation.
Figure 4-14 outlines the high-level architecture of a Service Bus queue.

Figure 4-14. Service Bus queue
In the previous figure, note the point-to-point relationship. Two instances of the same provider are enqueuing messages into a single Service Bus queue. Each message is consumed by only one of three consumer instances on the right. Next, we discuss how to implement messaging where different consumers may all be interested in the same message.

Events

Message queuing is an effective way to implement communication where a producer can asynchronously send a consumer a message. However, what happens when many different consumers are interested in the same message? A dedicated message queue for each consumer wouldn't scale well and would become difficult to manage.
To address this scenario, we move to the third type of message interaction, the event. One microservice announces that an action has occurred. Other microservices, if interested, react to the action, or event.
Eventing is a two-step process. For a given state change, a microservice publishes an event to a message broker, making it available to any other interested microservice. The interested microservice is notified by subscribing to the event in the message broker. You use the Publish/Subscribe pattern to implement event-based communication.
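The two-step publish/subscribe flow can be sketched with a minimal event bus (Python for brevity; class and event names are illustrative, and a real bus would wrap a broker rather than an in-process dict):

```python
class EventBus:
    """Encapsulates the broker and decouples publishers from subscribers."""

    def __init__(self):
        self._subscribers = {}   # event name -> list of handlers

    def subscribe(self, event, handler):
        self._subscribers.setdefault(event, []).append(handler)

    def publish(self, event, payload):
        for handler in self._subscribers.get(event, []):
            handler(payload)     # every interested microservice reacts

orders, inventory = [], []
bus = EventBus()
bus.subscribe("ItemAdded", lambda p: orders.append(p))      # ordering service
bus.subscribe("ItemAdded", lambda p: inventory.append(p))   # inventory service

bus.publish("ItemAdded", {"sku": "p1"})    # shopping basket announces the event
print(len(orders), len(inventory))         # 1 1
```

Note that the publisher never names its subscribers; adding a third interested service is one more `subscribe` call, with no change to the basket code.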
Figure 4-15 shows a shopping basket microservice publishing an event with two other microservices subscribing to it.

Figure 4-15. Event-Driven messaging
Note the event bus component that sits in the middle of the communication channel. It's a custom class that encapsulates the message broker and decouples it from the underlying application. The ordering and inventory microservices independently act upon the event with no knowledge of each other, nor of the shopping basket microservice. When the registered event is published to the event bus, they act upon it.
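A minimal in-process sketch of such an event bus shows the decoupling: the publisher raises an event by name, and every registered handler is invoked without the publisher knowing who is listening. The class shape and the `CheckoutStarted` event name are illustrative assumptions, not the reference application's actual types.

```python
# Illustrative in-process event bus implementing publish/subscribe.
class EventBus:
    def __init__(self):
        self._handlers = {}

    def subscribe(self, event_type, handler):
        """Register a handler for a named event type."""
        self._handlers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        # Every subscriber registered for this event type is invoked;
        # the publisher knows nothing about who is listening.
        for handler in self._handlers.get(event_type, []):
            handler(payload)

bus = EventBus()
log = []
# Two independent subscribers, akin to the ordering and inventory services.
bus.subscribe("CheckoutStarted", lambda e: log.append(f"ordering: {e['basket_id']}"))
bus.subscribe("CheckoutStarted", lambda e: log.append(f"inventory: {e['basket_id']}"))
bus.publish("CheckoutStarted", {"basket_id": "b-7"})
print(log)  # ['ordering: b-7', 'inventory: b-7']
```

In a real system the bus would wrap a message broker such as Service Bus rather than dispatching in-process, but the decoupling it provides is the same.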
With eventing, we move from queuing technology to topics. A topic is similar to a queue, but supports a one-to-many messaging pattern. One microservice publishes a message. Multiple subscribing microservices can choose to receive and act upon that message. Figure 4-16 shows a topic architecture.

Figure 4-16. Topic architecture
In the previous figure, publishers send messages to the topic. At the end, subscribers receive messages from subscriptions. In the middle, the topic forwards messages to subscriptions based on a set of rules, shown in dark blue boxes. Rules act as filters that forward specific messages to a subscription. Here, a "GetPrice" event would be sent to the price and logging subscriptions, as the logging subscription has chosen to receive all messages. A "GetInformation" event would be sent to the information and logging subscriptions.
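The routing behavior in the figure can be sketched by modeling each subscription's rule as a predicate over the incoming event. The subscription names follow the figure; representing rules as lambdas is an illustrative simplification of Service Bus filter expressions.

```python
# Sketch of topic rules: each subscription declares a filter, and the topic
# forwards an event to every subscription whose rule accepts it.
def route(event_type, subscriptions):
    """Return the subscriptions whose rule accepts this event."""
    return [name for name, rule in subscriptions.items() if rule(event_type)]

subscriptions = {
    "price":       lambda e: e == "GetPrice",
    "information": lambda e: e == "GetInformation",
    "logging":     lambda e: True,   # logging receives all messages
}

print(route("GetPrice", subscriptions))        # ['price', 'logging']
print(route("GetInformation", subscriptions))  # ['information', 'logging']
```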
The Azure cloud supports two different topic services: Azure Service Bus Topics and Azure Event Grid.

Azure Service Bus Topics

Sitting on top of the same robust brokered message model of Azure Service Bus queues are Azure Service Bus Topics. A topic can receive messages from multiple independent publishers and send messages to up to 2,000 subscribers. Subscriptions can be dynamically added or removed at runtime without stopping the system or recreating the topic.
Many advanced features from Azure Service Bus queues are also available for topics, including Duplicate Detection and Transaction support. By default, Service Bus topics are handled by a single message broker and stored in a single message store. But, Service Bus Partitioning scales a topic by spreading it across many message brokers and message stores.
Scheduled Message Delivery tags a message with a specific time for processing. The message won't appear in the topic before that time. Message Deferral enables you to defer a retrieval of a message to a later time. Both are commonly used in workflow processing scenarios where operations are processed in a particular order. You can postpone processing of received messages until prior work has been completed.
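The visibility rule behind Scheduled Message Delivery can be sketched simply: a message tagged with a future enqueue time stays hidden from receivers until that time arrives. The dictionary field names are illustrative, not the Service Bus wire format.

```python
# Sketch of scheduled delivery: messages become visible to receivers only
# once their scheduled time has been reached.
def visible_messages(queue, now):
    """Return only the messages whose scheduled time has passed."""
    return [m for m in queue if m["scheduled_at"] <= now]

queue = [
    {"body": "step-1", "scheduled_at": 100},
    {"body": "step-2", "scheduled_at": 200},  # held back until time 200
]

print([m["body"] for m in visible_messages(queue, now=150)])  # ['step-1']
print([m["body"] for m in visible_messages(queue, now=250)])  # ['step-1', 'step-2']
```

Message Deferral is the mirror image: the receiver, rather than the sender, chooses to set a message aside and retrieve it later.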
Service Bus topics are a robust and proven technology for enabling publish/subscribe communication in your cloud-native systems.

Azure Event Grid

While Azure Service Bus is a battle-tested messaging broker with a full set of enterprise features, Azure Event Grid is the new kid on the block.
At first glance, Event Grid may look like just another topic-based messaging system. However, it's different in many ways. Focused on event-driven workloads, it enables real-time event processing, deep Azure integration, and an open platform - all on serverless infrastructure. It's designed for contemporary cloud-native and serverless applications.
As a centralized eventing backplane, or pipe, Event Grid reacts to events inside Azure resources and from your own services.
Event notifications are published to an Event Grid Topic, which, in turn, routes each event to a subscription. Subscribers map to subscriptions and consume the events. Like Service Bus, Event Grid supports a filtered subscriber model where a subscription sets the rules for the events it wishes to receive. Event Grid provides fast throughput with a guarantee of 10 million events per second enabling near real-time delivery - far more than what Azure Service Bus can generate.
A sweet spot for Event Grid is its deep integration into the fabric of Azure infrastructure. An Azure resource, such as Cosmos DB, can publish built-in events directly to other interested Azure resources - without the need for custom code. Event Grid can publish events from an Azure Subscription, Resource Group, or Service, giving developers fine-grained control over the lifecycle of cloud resources. However, Event Grid isn't limited to Azure. It's an open platform that can consume custom HTTP events published from applications or third-party services and route events to external subscribers.
When publishing and subscribing to native events from Azure resources, no coding is required. With simple configuration, you can integrate events from one Azure resource to another leveraging built-in plumbing for Topics and Subscriptions. Figure 4-17 shows the anatomy of Event Grid.
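For custom events, an application posts a JSON array of events to the topic's HTTPS endpoint. A sketch of building such a payload follows; the field names follow the Event Grid event schema (`id`, `subject`, `eventType`, `data`, and so on), while the event type and subject values are hypothetical.

```python
# Sketch of a custom event in the Event Grid event schema. The actual POST
# to the topic endpoint (with its access key header) is omitted here.
import json
import uuid
from datetime import datetime, timezone

def make_event(subject, event_type, data):
    """Build one event in the Event Grid event schema."""
    return {
        "id": str(uuid.uuid4()),
        "subject": subject,
        "eventType": event_type,
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "dataVersion": "1.0",
        "data": data,
    }

event = make_event("orders/42", "Acme.Orders.OrderPlaced", {"total": 19.99})
body = json.dumps([event])  # Event Grid expects an array of events
print(event["eventType"])   # Acme.Orders.OrderPlaced
```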

Figure 4-17. Event Grid anatomy
A major difference between Event Grid and Service Bus is the underlying message exchange pattern.
Service Bus implements an older style pull model in which the downstream subscriber actively polls the topic subscription for new messages. On the upside, this approach gives the subscriber full control of the pace at which it processes messages. It controls when and how many messages to process at any given time. Unread messages remain in the subscription until processed. A significant shortcoming is the latency between the time the event is generated and the polling operation that pulls that message to the subscriber for processing. Also, the overhead of constant polling for the next event consumes resources and money.
Event Grid, however, is different. It implements a push model in which events are sent to the EventHandlers as received, giving near real-time event delivery. It also reduces cost as the service is triggered only when it's needed to consume an event - not continually as with polling. That said, an event handler must handle the incoming load and provide throttling mechanisms to protect itself from becoming overwhelmed. Many Azure services that consume these events, such as Azure Functions and Logic Apps, provide automatic scaling capabilities to handle increased loads.
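The contrast between the two exchange patterns can be sketched side by side: a pull consumer drains the subscription at its own pace, while a push broker invokes a registered handler as each event arrives. Both "brokers" here are in-memory stand-ins for illustration only.

```python
# Pull model (Service Bus style): the consumer controls when and how many
# messages to take; unread messages remain in the subscription.
def pull(subscription, batch_size):
    batch, subscription[:] = subscription[:batch_size], subscription[batch_size:]
    return batch

# Push model (Event Grid style): the broker drives delivery by invoking the
# handler as soon as an event is received.
class PushBroker:
    def __init__(self, handler):
        self.handler = handler

    def deliver(self, event):
        self.handler(event)  # invoked immediately, near real time

subscription = ["e1", "e2", "e3"]
print(pull(subscription, batch_size=2))  # ['e1', 'e2']; 'e3' remains queued

received = []
broker = PushBroker(received.append)
broker.deliver("e4")
print(received)  # ['e4']
```

The trade-off falls out directly: the pull consumer pays polling overhead and latency for its control over pacing, while the push handler gets immediacy but must be able to absorb whatever load arrives.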
Event Grid is a fully managed serverless cloud service. It dynamically scales based on your traffic and charges you only for your actual usage, not pre-purchased capacity. The first 100,000 operations per month are free - operations being defined as event ingress (incoming event notifications), subscription delivery attempts, management calls, and filtering by subject. With 99.99% availability, Event Grid guarantees the delivery of an event within a 24-hour period, with built-in retry functionality for unsuccessful delivery. Undelivered messages can be moved to a "dead-letter" queue for resolution. Unlike Azure Service Bus, Event Grid is tuned for fast performance and doesn't support features like ordered messaging, transactions, and sessions.

Streaming messages in the Azure cloud

Azure Service Bus and Event Grid provide great support for applications that expose single, discrete events, such as a new document being inserted into a Cosmos DB. But, what if your cloud-native system needs to process a stream of related events? Event streams are more complex. They're typically time-ordered, interrelated, and must be processed as a group.
Azure Event Hub is a data streaming platform and event ingestion service that collects, transforms, and stores events. It's fine-tuned to capture streaming data, such as continuous event notifications emitted from a telemetry context. The service is highly scalable and can store and process millions of events per second. Shown in Figure 4-18, it's often a front door for an event pipeline, decoupling the ingest stream from event consumption.

Figure 4-18. Azure Event Hub
Event Hub supports low latency and configurable time retention. Unlike queues and topics, Event Hubs keep event data after it's been read by a consumer. This feature enables other data analytic services, both internal and external, to replay the data for further analysis. Events stored in an event hub are only deleted upon expiration of the retention period, which is one day by default, but configurable.
Event Hub supports common event publishing protocols including HTTPS and AMQP. It also supports Kafka 1.0. Existing Kafka applications can communicate with Event Hub using the Kafka protocol providing an alternative to managing large Kafka clusters. Many open-source cloud-native systems embrace Kafka.
Event Hubs implements message streaming through a partitioned consumer model in which each consumer only reads a specific subset, or partition, of the message stream. This pattern enables tremendous horizontal scale for event processing and provides other stream-focused features that are unavailable in queues and topics. A partition is an ordered sequence of events that is held in an event hub. As newer events arrive, they're added to the end of this sequence. Figure 4-19 shows partitioning in an Event Hub.

Figure 4-19. Event Hub partitioning
Instead of reading from the same resource, each consumer group reads across a subset, or partition, of the message stream.
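The partitioned consumer model can be sketched as follows: events carrying the same partition key always land in the same partition, so each consumer can read its partition independently while per-key ordering is preserved. The toy hash function below is an illustrative assumption; Event Hubs uses its own internal key-to-partition mapping.

```python
# Sketch of Event Hub-style partitioning: events with the same partition key
# hash to the same partition, and each consumer reads only its partition.
def partition_for(key, partition_count):
    """Map a partition key to a partition index (stable toy hash)."""
    return sum(key.encode()) % partition_count

PARTITIONS = 4
partitions = {i: [] for i in range(PARTITIONS)}

# Telemetry events from two devices; each device is a partition key.
for device, reading in [("dev-a", 1), ("dev-b", 2), ("dev-a", 3)]:
    partitions[partition_for(device, PARTITIONS)].append((device, reading))

# All of dev-a's events land in one partition, preserving their order.
p = partition_for("dev-a", PARTITIONS)
print(partitions[p])  # [('dev-a', 1), ('dev-a', 3)]
```

Because partitions are independent ordered sequences, adding consumers (one per partition) scales throughput horizontally without sacrificing per-key ordering.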
For cloud-native applications that must stream large numbers of events, Azure Event Hub can be a robust and affordable solution.

About the Author:
Rob Vettor is a Principal Cloud-Native Architect for the Microservice Enterprise Service Group. Reach out to Rob at [[email protected]](mailto:[email protected]) or