..Don’t blame God or nature. This is our fault…

Scientists call the interval since the Industrial Revolution the “Anthropocene,” a period when our species has become the major factor altering the biological, physical and chemical properties of the planet on a geological scale. Empowered by fossil fuel–driven technologies, a rapidly growing human population and an insatiable demand for constant growth in consumption and the global economy, our species is responsible for the calamitous consequences.

We now know that the weight of water behind large dams and the injection of pressurized water into the earth for fracking can induce earthquakes. Clearing large swathes of forests, draining wetlands, depleting water for industrial agriculture, polluting marine and freshwater ecosystems with nitrogen, plastics and pesticides from farmland and cities, expanding urban areas, and employing ecologically destructive fishing practices such as drift nets and trawling, all combine to produce species extinction on a scale not seen since the mega-extinction of dinosaurs 65 million years ago.

But we use language to deflect blame from ourselves. Not long ago, wolves, seals and basking sharks were called “pests” or “vermin,” regarded as nuisances to be killed for bounties. Insects are the most numerous, diverse and important group of animals in ecosystems, yet all are affected by insecticides applied to eliminate the handful that attack commercial crops. One egregious class of pesticide is neonicotinoids, nerve toxins to which bees — important pollinators — are especially sensitive. Ancient forests are called “wild” or “decadent” while plantations that replace them after clear cutting are termed “normal.”

Environmentalists branded like criminals

One of the rarest ecosystems on Earth is the temperate rainforest stretching between Alaska and northern California, pinched between the Pacific Ocean and coastal mountains. The huge trees there have been decimated in the U.S. Fewer than 10 per cent remain. Yet environmentalists who called for the entire remnant to be protected from logging were branded as “greedy.”

Former B.C. premier Glen Clark famously labelled environmentalists like me “enemies of B.C.” Former federal finance minister Joe Oliver called us “foreign-funded radicals” while others said we were “eco-terrorists.” The real enemies, radicals and eco-terrorists are those who rush to destroy forests, watersheds or the atmosphere without regard to ecological consequences.

Recently defeated B.C. premier Christy Clark called opponents of pipelines or LNG plants “forces of no.” We who want to protect what we all need to survive would more accurately be called “forces of know” who say “yes” to a future of clean, renewable energy and a rich environment.

The Great Bear Rainforest, pictured here, is one of the rarest ecosystems on Earth. Stretching 64,000 square kilometres from the northern tip of Vancouver Island to Alaska, it is home to some of the wildest and rarest species of wildlife in Canada.

We seem to have forgotten that the word economy, like ecology, is based on the Greek oikos, meaning “domain” or “household.” Because of our ability to find ways to exploit our surroundings, humans are not confined to a specific habitat or ecosystem. We’ve found ways to live almost everywhere — in deserts, the Arctic, jungles, wetlands and mountains. Ecologists seek the principles, rules and laws that enable species to flourish sustainably. Economists are charged with “managing” our activity within the biosphere, our domain.

Former prime minister Stephen Harper decreed it was impossible to act to reduce greenhouse gas emissions to avoid climate change because it would destroy the economy. To people like him, the economy is more important than the air that provides weather and climate and enables us to live. At the same time, many “fiscal conservatives” rail against an effective market solution to climate change — carbon pricing — ignoring the example of Sweden, which imposed a carbon tax of about $35 a tonne in 1991, grew its economy by 60 per cent by 2012 while reducing emissions by 25 per cent, then raised the tax to more than $160 in 2014.

We know climate change is caused primarily by human use of fossil fuels. It’s influencing the frequency and intensity of such events as monstrous wildfires (Kelowna, Fort McMurray), floods (Calgary, Toronto), hurricanes (Katrina, Sandy), drought (California, Alberta), and loss of glaciers and ice sheets. There’s no longer anything “natural” about them.

We must acknowledge the human imprint. If we’re the cause of the problems, then we must stop blaming “nature” or “God.” We have to take responsibility and tackle them with the urgency they require.

 December 7th 2017

-30-

…………….be part of the solution or part of the problem…….everybody gets to choose…………w

……an aggravated one ………

..California Abalone diving prohibited next year…

……Population on the brink of collapse….

The decision came at a meeting of the California Fish and Game Commission on Thursday in San Diego, following a warning from scientists at the California Department of Fish and Wildlife that the population is in severe decline. The commission voted unanimously to close the fishery for one year, in 2018. The season would normally open in April.

“There are multiple indications that this fishery is collapsing,” said Cynthia Catton, an environmental scientist for the California Department of Fish and Wildlife. “There’s no sign that it’s even hit the bottom yet. We’re seeing continuing active mortality, we’re seeing continued starvation conditions.”

The decision to close or keep the abalone fishery open has created tensions between state biologists on one side, and members of the diving community and the Nature Conservancy on the other. The two sides disagree on the best way to maintain the sea snail’s dwindling population in light of severe environmental conditions, as well as on the best scientific methods to track their population.

Kelp forest devastation over the past few years has led to starvation, mortality and low reproduction rates in red abalone, and an exploding population of purple sea urchins, which compete with abalone for food, has only made it worse. For the same reasons, the 2017 season for sport abalone fishing was reduced by two months and the annual limit was reduced from 18 to 12 per person.

However, because abalone take many years to reach reproductive age, “The consequences could last generations,” said Catton.

The abalone fishery south of San Francisco has been closed since 1997 for similar reasons, and the population has not yet recovered enough to reopen. That’s in contrast to the fishery’s heyday in the 1950s and ’60s, when California commercial fishermen brought in around 2,000 metric tons of different species of abalone annually. Commercial fishing was banned in the state in 1997.

In a letter it submitted last month to the Fish and Game Commission, the Nature Conservancy argued to keep the fishery open, offering an alternative way to monitor and manage the population, which it called “a conservative approach to resource management under the recent extreme environmental conditions, thereby ensuring full stock recovery, while still maintaining access to the resource.”

Yet avid divers like Jack Likins of Gualala (Mendocino County) argue that the abalone would actually be better protected if legal fishing remained open in a limited capacity, since poaching would continue either way. They also worry the fishery would never reopen once closed.

According to a management plan created 20 years ago, the state must close the fishery when the density of abalones in certain areas drops below a certain level. (The state is in the process of updating that plan.) Catton and a team of fellow scientists based at the UC Davis Bodega Marine Laboratory in Bodega Bay, along with volunteer divers, conduct the surveys each fall.

“It’s hard to describe the emotion that I felt doing the surveys this year. It was just heartbreaking,” she said. “Areas that I remember being lush with kelp, that I remember having to fight the kelp, now it’s bare rock. It’s just bare rock, with countless abalone shells littering the floor.”

In August and September, the divers surveyed the 10 most popular diving sites in Mendocino and Sonoma and found abalone at an average of 0.15 animals per square meter, half the bare minimum that triggers a closure of the fishery. The density has dropped 65 percent since they conducted a survey last year, Catton said.

Also, Catton observed that the live animals had lost muscle mass, meaning they can’t reliably clamp onto rocks, which makes them vulnerable to predators, including sea urchins and seagulls.

“It’s one of their primary defenses against predation — human predation or otherwise,” she said of a healthy abalone’s foot muscle. “It holds them in place and keeps them from getting washed up on shore with the waves.”

When abalones are starving, their reproductive organs also shrink. The other thing Catton found alarming was that abalone had moved mostly to shallow areas no more than 15 feet deep, she said. Normally a lot of them congregate in the deeper areas that most divers can’t access, which forms a natural protected nursery to keep the population going.

Yet Likins, who dives about 30 times a season, mostly to do volunteer surveys for the nonprofit organizations Reef Check and the Nature Conservancy, said things often look different to him than what the surveys represent. He also said the surveys have been shown to be problematic, based on a peer review of the department’s methods as well as analysis done by the Nature Conservancy.

“The main drawbacks of it basically are that it’s statistically unreliable,” said Likins. “There is so much variation from year to year.” He also said that though the surveys are done at popular diving spots, they don’t account for vast areas of the coast that abalone inhabit.

Tara Duggan is a San Francisco Chronicle staff writer. Email: tduggan@sfchronicle.com Twitter: @taraduggan

-30-

…………………                                    w

……just another aggravated aggregator……………..

..The next step in the sequence is almost insultingly obvious….

Trump is preparing to shut down Robert Mueller’s investigation of Russian intervention in the 2016 election.

Robert Mueller. Andrew Harnik/AP Photo

..The Mueller Investigation Is in Mortal Danger…

If there was any single event that would cause the Republican elite to openly revolt against the ongoing Trumpification of their party, it would be the nomination of Roy Moore for U.S. Senate in Alabama. Even prior to the allegations of child molestation, Moore had discovered innovative new realms of extremism that had never occurred to even his most ideologically fervent colleagues. He proposed banning Muslims from serving in elected office, called for the criminalization of homosexuality, and defied court rulings and declared his own biblical jurisprudence the sole valid legal authority.

And if that revolt was going to begin anywhere, it would likely be in Utah. The state’s Mormon culture recoiled from Donald Trump’s libidinous boasting, erratic behavior, and displays of extravagant consumption. Between the 2012 and 2016 elections, Utah’s Republican presidential margin underwent an astonishing 28 percent collapse.

Orrin Hatch, who has represented Utah in the Senate since 1977, greeted Moore’s candidacy in this year’s election with skepticism. (“I have trouble with” Moore’s comments on gays and Muslims, he said in October.) Once evidence surfaced of Moore’s alleged predation of teenage girls, Hatch pulled the rip cord. “If the deeply disturbing allegations in the Washington Post are true, Senator Hatch believes that Judge Moore should step aside immediately,” his spokesman declared.

But even in Utah, there were forces at work to make Hatch reconsider. He was facing a potential primary challenge from a Trumpian candidate who had met with party insurrectionist Steve Bannon and Citizens United president David Bossie. In November, Hatch lavished praise on the president, calling him “one of the best I’ve served under.” Trump rewarded Hatch by endorsing him. Hatch then defended Trump’s endorsement of Moore, arguing that he “needs every Republican he can get so he can put his agenda through.”

Hatch’s response to Moore has followed that of his entire party, and the backtracking has usefully laid bare its power dynamics. As recently as a few weeks ago, Republicans were debating whether to shun Moore or, should he win, vote to expel him from the Senate. They have settled on a course of action that had initially been off the map altogether: endorsing their lecherous ayatollah and providing financial support from the Republican National Committee.

What mattered most was that Donald Trump has contempt for any standards of conduct. (Indeed, he reportedly has taken offense at the accusations against Moore, which remind him of his own treatment.) And no Republican who wishes to stay in office can afford to offend the president, who commands overwhelming support among the party base.

Would Republicans denounce him? Expel him? It turned out they would do nothing. By the time Moore came along, the party’s moral sensibilities had been worn to a nub.

The next step in the sequence is almost insultingly obvious. Trump is preparing to shut down Robert Mueller’s investigation of Russian intervention in the 2016 election.

The administration and its allied media organs, especially those owned by Rupert Murdoch, have spent months floating a series of rationales, of varying degrees of implausibility, for why a deeply respected Republican law-enforcement veteran is disqualified from leading the inquiry: He is friends with James Comey, who is biased because Trump fired him; Comey is biased because he pursued leads turned up in Christopher Steele’s investigation, which was financed by Democrats; Mueller has failed to investigate Hillary Clinton’s marginal-to-nonexistent role in a uranium sale.

The newest pseudo-scandal fixates on the role of Peter Strzok, an FBI official who helped tweak the language Comey employed in his statement condemning Clinton’s email carelessness and has also worked for Mueller.
His alleged crime is a series of text messages criticizing Trump. Mueller removed Strzok from his team, but that is not enough for Trump’s supporters, who are seizing on Strzok’s role as a pretext to discredit and remove Mueller, too. The notion that a law-enforcement official should be disqualified for privately expressing partisan views is a novel one, and certainly did not trouble Republicans last year, when Rudy Giuliani was boasting on television about his network of friendly agents. Yet in the conservative media, Mueller and Comey have assumed fiendish personae of almost Clintonian proportions.

When Mueller was appointed, legal scholars debated whether Trump had the technical authority to fire him, but even the majority who believed he did assumed such a power existed only in theory. Republicans in Congress, everyone believed, would never sit still for such a blatant cover-up.
Josh Blackman, a conservative lawyer, argued that Trump could remove the special counsel, but “make no mistake: Mueller’s firing would likely accelerate the end of the Trump administration.” Texas representative Mike McCaul declared in July, “If he fired Bob Mueller, I think you’d see a tremendous backlash, response from both Democrats but also House Republicans.” Such a rash move “could be the beginning of the end of the Trump presidency,” Senator Lindsey Graham proclaimed.

In August, members of both parties began drawing up legislation to prevent Trump from sacking Mueller. “The Mueller situation really gave rise to our thinking about how we can address the current situation,” explained Republican senator Thom Tillis, a sponsor of one of the bills. By early autumn, the momentum behind the effort had slowed; by Thanksgiving, Republican interest had melted away. “I don’t see any heightened kind of urgency, if you’re talking about some of the reports around Flynn and others,” Tillis said recently. “I don’t see any great risk.”

In fact, the risk has swelled. Trump has publicly declared any investigation into his finances would constitute a red line, and that he reserves the option to fire Mueller if he investigates them. Earlier this month, it was reported that Mueller has subpoenaed records at Deutsche Bank, an institution favored both by Trump and the Russian spy network.

John Dowd, a lawyer for Trump, recently floated the wildly expansive defense that a “president cannot obstruct justice, because he is the chief law-enforcement officer.” Fox News legal analyst Gregg Jarrett called the investigation “illegitimate and corrupt” and declared that “the FBI has become America’s secret police.” Graham is now calling for a special counsel to investigate “Clinton email scandal, Uranium One, role of Fusion GPS, and FBI and DOJ bias during 2016 campaign” — i.e., every anti-Mueller conspiracy theory. And perhaps as ominously, Trump’s allies have been surfacing fallback defenses. Yes, “some conspiratorial quid pro quo between somebody in the Trump campaign and somebody representing Vladimir Putin” is “possible,” allowed Wall Street Journal columnist Holman Jenkins, but “we would be stupid not to understand that other countries have a stake in the outcome of our elections and, by omission or commission, try to advance their interests. This is reality.” The notion of a criminal conspiracy by a hostile nation to intervene in the election in return for pliant foreign policy has gone from unthinkable to blasé, an offense only to naïve bourgeois morality.

It is almost a maxim of the Trump era that the bounds of the unthinkable continuously shrink.

The capitulation to Moore was a dry run for the coming assault on the rule of law.

*This article appears in the December 11, 2017, issue of New York Magazine.

-30-

……….   the captain has turned on the seatbelt sign……………………W

…Not a stand-up guy…

If you read about Louis C.K.’s actions, and if you understand and care about standup comedy, you might well be aghast. What he said and did was particularly manipulative, and particularly insidious.
Imagine yourself a female comic, talented but not yet successful, invited to the hotel room of Louis C.K., who is rightfully considered one of the best comics of all time.
He is cutting-edge — a man who, for example, managed to successfully deliver, on Saturday Night Live, a shtick that was at least ostensibly sympathetic to pedophiles. He did it because he knew how. He’s that good. So there you are, in his hotel room.
You are flattered to be there. Selfishly, perhaps, you think a friendship with Louis might provide a boost to your career.
And he looks you up and down and he says, deadpan, something like: “Do you mind if I take my clothes off and masturbate while looking at you?”
You laugh. Of course you laugh. It is funny. He is doing something sophisticated, from the standpoint of comedy, and is inviting you into a pretty rarefied club. He is making fun of romance by reducing the entire absurd mating dance to its most absurd, un-hypocritical center. Not, “Hey, can I buy you a drink?” Not, “Come here often?” The hell with all that. Let’s get down to the nakedly disgusting basics. That’s satire. That’s comedy.
So, yes, you laugh. This guy is edgy. Edge is good. Edge is the essence of the best comedy. And he seems to be honoring you by assuming you’ll get it.
Then he takes off his clothes. “Holy cow, this guy is really edgy.” See, you may well be extremely uncomfortable — who wouldn’t be? — but you also understand on some level that it’s the identical joke, but taken to a greater, edgier extreme. Edgy humor is supposed to make you uncomfortable. You think: This must be the way really great comics deal with each other: We are above niceties. We don’t have to pretend, among ourselves. We can tell it raw. And he is doing that, and he is doing that with you. He is respecting your talent. You are kind of grateful, maybe.
Then he … does it. Now where are you? This is why I really, really hate what Louis C.K. did to these women. He is taking advantage of their professional adulation of him, and of their ambition, and — more than anything else — of the professional comic’s endemic insecurity about their art, and manipulating them through the inherent ambiguity of humor. These women are comedians. He takes the thing they love and turns it against them.

So yeah, screw you, Louie.

========

This column is adapted from Gene’s Nov. 14 online chat introduction.

Below the Beltway

By Gene Weingarten

(c) 2017, The Washington Post Writers Group

-30-

……………So Louis has been getting attaboys for his acknowledgement and admission of the facts….and I won’t be rooting for his comeback tour ………….but I am rooting for the scales to make obvious the significant differences ……….in this power/sex/abuser context………….. between a Louis C.K. …….a Senator Al Franken …………………………………………………….and a serial abuser/rapist the likes of Harvey Weinstein …………       …accountability is crucial ….no assault can remain buried….no victim still afraid to come forward………….. all of which is bizarrely held hostage by President Pussy Grabber remaining in the White House …………..but in the meantime ….not just abusers but their facilitators and faceless friends who helped as well ………..stop or be stopped……………..w

….An Acoustic Evening at the Vienna Opera House…..

……. one of the greatest guitar players of his generation, Joe Bonamassa…

 

and if that’s not enough for now…..hope you don’t mind….🎶

Published on Mar 22, 2013

Hailed worldwide as one of the greatest guitar players of his generation, Joe Bonamassa has almost single-handedly redefined the blues-rock genre and brought it into the mainstream. He continues this role with his first-ever entirely acoustic concert, recorded at the venerable Vienna Opera House with a global ensemble put together by longtime creative partner Kevin Shirley. The 2CD/2DVD/Blu-ray release comes out March 26, 2013 on Bonamassa’s label J&R Adventures.

Bonamassa — a predominantly electric guitar player — was ready for a complete departure from his usual projects. For years, he had been wowing audiences with his flagship acoustic song “Woke Up Dreaming,” which has become an iconic staple of his world tour and a fan favorite. Building on the popularity of this song, Bonamassa and producer Shirley set out to design an entirely new and intimate “unplugged” concert experience that they would then bring to seven lucky cities in Europe during summer 2012. They recorded the performance in Vienna — the “City of Music” — at the Vienna Opera House, a culturally iconic venue steeped in history and heritage, making it the perfect backdrop for this unprecedented show.

An Acoustic Evening at the Vienna Opera House features gorgeously textured music — 20 songs, filmed in HD and recorded in Dolby 5.1 — made with a wealth of rare, vintage, organic and “oddball” instruments. The DVD and Blu-ray will feature 90 minutes of extra footage, interviews and a making-of documentary. Highlights among the 20 songs include favorites that span Bonamassa’s career—”The Ballad of John Henry,” “Woke Up Dreaming,” “Ball Peen Hammer,” “Sloe Gin,” and “Mountain Time”—including many he doesn’t normally perform live, such as “Athens to Athens,” “Black Lung Heartache,” “Jelly Roll,” “Around The Bend,” “Jockey Full of Bourbon,” “Seagull,” and “Richmond.” Accompanying Bonamassa on the same stage once graced by Mozart, Beethoven, Schubert, Brahms, Mahler, and Haydn are: traditional Irish fiddler Gerry O’Connor, who also plays mandolin and banjo; Swedish multi-instrumentalist Mats Wester on the nyckelharpa, a keyed fiddle; Los Angeles-based keyboardist Arlan Schierbaum texturing the mix with celeste, accordions, toy pianos, and assorted “organic” instruments; and renowned Puerto Rican percussionist Lenny Castro, whose work spans genres and whose résumé reads like a who’s who of artists, including the Rolling Stones, Sir Elton John, Eric Clapton, Boz Scaggs, Toto, Steely Dan, Christopher Cross, Stevie Wonder, David Sanborn, Avenged Sevenfold, Little Feat, Tom Petty, the Red Hot Chili Peppers and many more.

-30-

……..🎶………………..you’re welcome………..w……….► FREE ALBUM DOWNLOAD – http://goo.gl/9oI018

… One of….10 Hacks For Safer Cyber Citizenship…

Chances are cyber criminals already have all your personal data.

Defrag This

 Now is the time to up your game with this series of 10 Hacks for Safer Cyber Citizenship.

October is National Cyber Security Awareness Month and it couldn’t come too soon.

……and flew by just like that……w

On the back of the announcement that 145.5 million people lost all of their personal data including social security numbers, birthdates and addresses thanks to Equifax, we all need to ratchet up our online security game.  So, this month, we’ll do a series of posts that will take you well down the path to being a cyber super guru.

Hack #1 Use the Force (of Math)

Mathematics is a crazy thing and some really simple knowledge can go a long way to improving your safety. There is this little trick of exponentials you can use to really improve your security. The power of exponents can best be explained by an old math trick.

Start the month by putting one penny in a piggy bank. Each day double yesterday’s contribution. On the second day you add 2¢, the third you add 4¢, etc. After one week, you have a total savings of $1.27. Sounds boring, and our simple minds would extrapolate that at this rate we might save a few dollars after a month. But in fact, after 30 days you would have banked $10,737,418.23.


Now in reality, on the 30th you’d have to come up with more than $5,300,000 to put in the piggy bank, and you would need a pretty big piggy bank to pull this trick off. But the trick shows the power of exponents, and here we are only doubling: a base of just 2.
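The arithmetic behind the trick is just a geometric series: daily deposits of 1, 2, 4, … cents add up to 2^n − 1 cents after n days. A few lines of Python (illustrative, not from the article) confirm the figures above:

# Doubling-penny savings: 1 cent on day 1, double the previous deposit each day.
# Total after n days is the geometric series 1 + 2 + ... + 2**(n-1) = 2**n - 1 cents.
def total_dollars(days: int) -> float:
    return (2**days - 1) / 100

print(total_dollars(7))     # 1.27        -> $1.27 after one week
print(total_dollars(30))    # 10737418.23 -> about $10.7 million after 30 days
print(2**29 / 100)          # 5368709.12  -> the deposit on day 30 alone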

Now let’s apply this trick to passwords. If you use only numbers for a password, each character has 10 possibilities. Use the lower-case alphabet and each character has 26. And if you use symbols, numbers, and upper- and lower-case characters, each one has close to 100 possibilities, and the length of your password is the exponent. That is the math part.

Now for the computer part. You see, cybercriminals have tools that can generate guesses at a password really fast. The more complex your password, and the longer it is, the longer you would survive one of these password breakers. It is estimated that a 4-digit numeric PIN can be hacked by a computer in 22 milliseconds. A 6-character alphabetic password would only last 21 seconds. Using 6 characters that include symbols, numbers, upper and lower case letters raises that to 11 hours.

But hackers are patient and 11 hours isn’t very long for a computer.

Here is where the power of exponents comes in.

Just add four more characters and it would take 91 millennia to hack your password.

That means your 10-character password is, for practical purposes, un-hackable.
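To see how those estimates scale, here is a rough brute-force calculation in Python. The guess rate is an assumption chosen to land near the article’s figures, not a number from the article; real cracking speeds vary enormously with hardware and password hashing.

# Worst-case time to try every password of a given length.
# GUESSES_PER_SECOND is an assumed round number, not the article's figure.
GUESSES_PER_SECOND = 2e7

def crack_time_seconds(alphabet_size: int, length: int) -> float:
    """Seconds to exhaust all alphabet_size ** length possible passwords."""
    return alphabet_size**length / GUESSES_PER_SECOND

print(crack_time_seconds(10, 4))   # ~0.0005 s: a 4-digit PIN falls instantly
print(crack_time_seconds(26, 6))   # ~15 s: 6 lower-case letters
print(crack_time_seconds(95, 6))   # ~37,000 s, about 10 hours: 6 full-keyboard characters
print(crack_time_seconds(95, 10))  # ~3e12 s, roughly 95,000 years: 10 characters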

By KEVIN CONKLIN

Oct 2017

Defrag This | Read. Reflect. Reboot.

-30-

………..Did you hear that?….He used the U Word!…….Un- FrigginHackable…..BaBoom!………w

145.5 million people lost all of their personal data

..The death of Christianity …in the U.S….

…Christianity has died in the hands of Evangelicals……….so says Miguel De La Torre……..

Evangelicalism ceased being a religious faith tradition following Jesus’ teachings concerning justice for the betterment of humanity when it made a Faustian bargain for the sake of political influence. The beauty of the gospel message — of love, of peace and of fraternity — has been murdered by the ambitions of Trumpish flimflammers who have sold their souls for expediency. No greater proof is needed of the death of Christianity than the rush to defend a child molester in order to maintain a majority in the U.S. Senate.

Evangelicals have constructed an exclusive interpretation which fuses and confuses white supremacy with salvation. Only those from the dominant culture, along with their supposed inferiors who with colonized minds embrace assimilation, can be saved. But their salvation damns Jesus. To save Jesus from those claiming to be his heirs, we must wrench him from the hands of those who use him as a façade from which to hide their phobias — their fear of blacks, their fear of the undocumented, their fear of Muslims, their fear of everything queer.

Evangelicalism has ceased to be a faith perspective rooted in Jesus the Christ and has become a political movement whose beliefs repudiate all Jesus advocated. A message of hate permeates their pronouncements, evident in sulphurous proclamations like the Nashville Statement, which elevates centuries of sexual dysfunctionalities since the days of Augustine by imposing them upon Holy Writ. They condemn as sin those who express love outside the evangelical anti-body straitjacket.

Evangelicalism’s unholy marriage to the Prosperity Gospel justifies multi-millionaire bilkers wearing holy vestments made of sheep’s clothing who discovered being profiteers rather than prophets delivers an earthly security never promised by the One in whose name they slaughter those who are hungry, thirsty and naked, and the alien among them. Christianity at a profit is an abomination before all that is Holy. From their gilded pedestals erected in white centers of wealth and power, they gaslight all to believe they are the ones being persecuted because of their faith.

Evangelicalism’s embrace of a new age of ignorance blames homosexuality for Harvey’s rage rather than considering the scientific consequences climate change has on the increasing number of storms of greater and greater ferocity. To ignore the damage caused to God’s creation so the few can profit in raping Mother Earth causes celebrations in the fiery pits of Gehenna.

Evangelicalism forsakes holding a sexual predator, an adulterer, a liar and a racist accountable, instead serving as a shield against those who question POTUS’ immorality because of some warped reincarnation of Cyrus. Laying holy hands upon the incarnation of the very vices Jesus condemned to advance a political agenda — instead of rebuking and chastising in loving prayer — has prostituted the gospel in exchange for the victory of a Supreme Court pick.

Evangelicalism either remained silent or actually supported Charlottesville goose steppers because they protect their white privilege with the doublespeak of preserving heritage, leading them to equate opponents of fascist movements with the purveyors of hatred. Jesus has yet to recover from the vomiting induced by the Christian defenders of torch-wielding white nationalists calling for “blood-and-soil.”

The Evangelicals’ Jesus is satanic, and those who hustle this demon are “false apostles, deceitful workers, masquerading as apostles of Christ. And no wonder, for Satan himself masquerades as an angel of light. It is not surprising, then, if his servants also masquerade as servants of righteousness. Their end will be what their actions deserve” (2 Cor. 11:13-15, NIV).

You might wonder if my condemnation is too harsh. It is not, for the Spirit of the Lord has convicted me to shout from the mountaintop how God’s precious children are being devoured by the hatred and bigotry of those who have positioned themselves as the voice of God in America.

As a young man, I walked down the sawdust aisle at a Southern Baptist church and gave my heart to Jesus. Besides offering my broken heart, I also gave my mind to understanding God, and my arm to procuring God’s call for justice. I have always considered myself to be an evangelical, but I can no longer allow my name to be tarnished by that political party masquerading as Christian. Like many women and men of good will who still struggle to believe, but not in the evangelical political agenda, I too no longer want or wish to be associated with an ideology responsible for tearing humanity apart. But if you, dear reader, still cling to a hate-mongering ideology, may I humbly suggest you get saved.

NOVEMBER 2017

-30-

………………..well….not the snappy ending I was rooting for………….but I did get this…… I realized I was weak on “Gehenna”…..so I Googled it and learned that it is a Biblical term that has been interpreted as analogous to the concept of “Hades“, “Hell” or “Purgatory“. ..ok….good enough….but the best part…… ….wait for it……”For other uses, see Gehenna (disambiguation).  Not to be confused with Gahanna, Ohio. ” ……………so I woke up in Gahanna……..feeling like hell…………….. it was only Ohio….   #Beelzebub

….and what about all those guys doing time in hell on a meat rap?……..w

….Hey Joe….

Joe Bonamassa – Live from The Royal Albert Hall 2009

…When moralizers get caught with their pants down…

…and this dirtbag could end up in the United States Senate……

When moralizers get caught with their pants down
Former Alabama Chief Justice and U.S. Senate candidate Roy Moore speaks at a rally, in Fairhope, Ala., on Sept. 25. 

From the LATimes Editorial Board

OK, let’s get this straight. Roy Moore, the self-righteous, Bible-thumping Alabama Republican running for the U.S. Senate, has been accused of having a sexual encounter with a 14-year-old girl when he was 32. Three other women said he pursued them when they were between the ages of 16 and 18 and he was in his 30s. And who is Moore? A man who kept a gigantic monument of the Ten Commandments on his courthouse wall despite a judge’s order to remove it, and who has been a life-long public pontificator on behalf of traditional sexual morality. A rabble-rousing evangelical Christian who has condemned homosexuality, said that “the transgenders don’t have rights” and who has called the United States “a moral slum.”

Is there no commandment about hypocrisy? If not, there should be. Moore categorically denies the allegations, which were first reported by the Washington Post. And no one should be judged before all the evidence is in.

Yet we cannot be blamed if we feel we’ve seen this movie before. There has been no short supply, historically, of mendacious conservatives who sermonize about the way others live their lives, but fail to abide by the standards they so glibly set. David Vitter. Dennis Hastert. Larry Craig. Just last month, Rep. Tim Murphy, a Pennsylvania Republican who claimed to “stand for the dignity and value of all human life, both the born and the unborn,” resigned from Congress after the local paper published text messages in which he urged his mistress to have an abortion. Honestly, you can’t make this stuff up. Out of pure cynicism and political expediency, they deny truths about themselves and others, set rules (and pass laws!) that they personally can’t or won’t obey.

Does that make them worse than other run-of-the-mill sinners? In some ways, it does. Louis C.K., unacceptable as his behavior may have been, didn’t inveigh sanctimoniously against masturbation. Quite the contrary.

As the Moore story developed, we were reminded on Facebook of a quote from the late Christopher Hitchens, for whom the puncturing of smug pieties was a lifelong commitment: “Whenever I hear some bigmouth in Washington or the Christian heartland banging on about the evils of sodomy or whatever, I mentally enter his name in my notebook and contentedly set my watch. Sooner, rather than later, he will be discovered down on his weary and well-worn old knees in some dreary motel or latrine, with an expired Visa card, having tried to pay well over the odds to be peed upon….”

One of our colleagues on the editorial board begs to differ with all this. The “hypocrisy” argument, he has written, is “an exaggerated evil.” He argues: “I’d want a legislator to vote for tough penalties against, say, kidnapping, even if he was incubating a plan to kidnap someone during the election recess.” Plus, he notes, some public figures see their duty as reflecting their constituents’ preferences, not their own — so maybe they’re not required to walk the walk. At the end of the day, he says, it’s not the hypocrisy that matters, but the underlying behavior that exposes it.

But the hypocrisy does matter. Fire-breathing preachers and members of Congress and other complacent moralizers ought to live the lives they insist others must live, or at the very least, be transparent about their inability to do so.

While it is true that few of us live up every day to the standards of behavior we endorse in the abstract, most of us don’t set the rules for others or make broad pronouncements about whose chosen lifestyles are acceptable or legal.

Given the number of people who spoke on the record about Moore in the Washington Post, this is not a story that will — or should — go away just because Moore has issued a categorical denial. Even in the absence of any criminal charges, he should make himself available for a detailed interview about the specific allegations. Did he know these women and girls? What was his interaction with them? If he is not willing to address or refute their recollections, he should get out of the race.

-30-

………..we don’t often get an opportunity to put down an old dog like this……..shouldn’t be wasted…………………..w

…… American Gun Sickness…….

Pistols for sale at Target Masters, an indoor shooting center, in Garland, Texas on March 3.
Cooper Neill for The Washington Post via Getty Images

America is an exceptional country when it comes to guns. It’s one of the few countries in which the right to bear arms is constitutionally protected. But America’s relationship with guns is unique in another crucial way: Among developed nations, the US is far and away the most violent — in large part due to the easy access many Americans have to firearms.

These charts and maps show what that violence looks like compared with the rest of the world, why it happens, and why it’s such a tough problem to fix.

1) America has six times as many firearm homicides as Canada, and nearly 16 times as many as Germany

Javier Zarracina/Vox

This chart, compiled using United Nations data collected by Simon Rogers for the Guardian, shows that America far and away leads other developed countries when it comes to gun-related homicides. Why? Extensive reviews of the research by the Harvard School of Public Health’s Injury Control Research Center suggest the answer is pretty simple: The US is an outlier on gun violence because it has way more guns than other developed nations.

2) America has 4.4 percent of the world’s population, but almost half of the civilian-owned guns around the world

Javier Zarracina/Vox

3) There have been more than 1,500 mass shootings since Sandy Hook

Soo Oh/Vox

In December 2012, a gunman walked into Sandy Hook Elementary School in Newtown, Connecticut, and killed 20 children, six adults, and himself. Since then, there have been at least 1,518 mass shootings, with at least 1,715 people killed and 6,089 wounded as of October 2017.

The counts come via the Gun Violence Archive, which has hosted a database that tracks mass shootings since 2013. But since some shootings go unreported, the database is likely missing some, as well as the details of some of the events.

The tracker uses a fairly broad definition of “mass shooting”: It includes not just shootings in which four or more people were murdered, but shootings in which four or more people were shot at all (excluding the shooter).

Even under this broad definition, it’s worth noting that mass shootings make up a tiny portion of America’s firearm deaths, which totaled more than 33,000 in 2014.

4) On average, there is more than one mass shooting for each day in America

Christopher Ingraham/Washington Post

Whenever a mass shooting occurs, supporters of gun rights often argue that it’s inappropriate to bring up political debates about gun control in the aftermath of a tragedy. For example, former Louisiana Gov. Bobby Jindal, a strong supporter of gun rights, criticized former President Barack Obama for “trying to score cheap political points” when Obama mentioned gun control after a mass shooting in Charleston, South Carolina.

But if this argument is followed to its logical end, then it will never be the right time to discuss mass shootings, as Christopher Ingraham pointed out at the Washington Post. Under the broader definition of mass shootings, America has nearly one mass shooting a day. So if lawmakers are forced to wait for a time when there isn’t a mass shooting to talk gun control, they could find themselves waiting for a very long time.

5) States with more guns have more gun deaths

Mother Jones

Using data from a study in Pediatrics and the Centers for Disease Control and Prevention, Mother Jones put together the chart above that shows states with more guns tend to have far more gun deaths. And it’s not just one study. “Within the United States, a wide array of empirical evidence indicates that more guns in a community leads to more homicide,” David Hemenway, the Harvard Injury Control Research Center’s director, wrote in Private Guns, Public Health.

Read more in Mother Jones’s “10 Pro-Gun Myths, Shot Down.”

6) It’s not just the US: Developed countries with more guns also have more gun deaths

Josh Tewksbury

7) States with tighter gun control laws have fewer gun-related deaths

Zara Matheson/Martin Prosperity Institute

When economist Richard Florida took a look at gun deaths and other social indicators, he found that higher populations, more stress, more immigrants, and more mental illness didn’t correlate with more gun deaths. But he did find one telling correlation: States with tighter gun control laws have fewer gun-related deaths. (Read more at Florida’s “The Geography of Gun Deaths.”)

This is backed by other research: A 2016 review of 130 studies in 10 countries, published in Epidemiologic Reviews, found that new legal restrictions on owning and purchasing guns tended to be followed by a drop in gun violence — a strong indicator that restricting access to guns can save lives.

8) Still, gun homicides (like all homicides) have declined over the past couple decades

The good news is that all firearm homicides, like all homicides and crime, have declined over the past two decades. (Although that may have changed in 2015 and 2016, with a recent rise in murders nationwide.)

There’s still a lot of debate among criminal justice experts about why this crime drop is occurring — some of the most credible ideas include mass incarceration, more and better policing, and reduced lead exposure from gasoline. But one theory that researchers have widely debunked is the idea that more guns have deterred crime — in fact, the opposite may be true, based on research compiled by the Harvard School of Public Health’s Injury Control Center.

9) Most gun deaths are suicides

Although America’s political debate about guns tends to focus on grisly mass shootings and murders, a majority of gun-related deaths in the US are suicides. As Dylan Matthews explained for Vox, this is actually one of the most compelling reasons for reducing access to guns — there is a lot of research that shows greater access to guns dramatically increases the risk of suicide.

10) The states with the most guns report the most suicides

11) Guns allow people to kill themselves much more easily

Estelle Caswell/Vox

Perhaps the reason access to guns so strongly contributes to suicides is that guns are much deadlier than alternatives like cutting and poison.

Jill Harkavy-Friedman, vice president of research for the American Foundation for Suicide Prevention, previously explained that this is why reducing access to guns can be so important to preventing suicides: Just stalling an attempt or making it less likely to result in death makes a huge difference.

“Time is really key to preventing suicide in a suicidal person,” Harkavy-Friedman said. “First, the crisis won’t last, so it will seem less dire and less hopeless with time. Second, it opens the opportunity for someone to help or for the suicidal person to reach out to someone to help. That’s why limiting access to lethal means is so powerful.”

She added, “[I]f we keep the method of suicide away from a person when they consider it, in that moment they will not switch to another method. It doesn’t mean they never will. But in that moment, their thinking is very inflexible and rigid. So it’s not like they say, ‘Oh, this isn’t going to work. I’m going to try something else.’ They generally can’t adjust their thinking, and they don’t switch methods.”

12) Programs that limit access to guns have decreased suicides

Estelle Caswell/Vox

When countries reduced access to guns, they saw a drop in the number of firearm suicides. The data above, taken from a study by Australian researchers, shows that suicides dropped dramatically after the Australian government set up a gun buyback program that reduced the number of firearms in the country by about one-fifth.

The Australian study found that buying back 3,500 guns per 100,000 people correlated with up to a 50 percent drop in firearm homicides, and a 74 percent drop in gun suicides. As Dylan Matthews noted for Vox, the drop in homicides wasn’t statistically significant. But the drop in suicides most definitely was — and the results are striking.

Australia is far from alone in these types of results. A study from Israeli researchers found that suicides among Israeli soldiers dropped by 40 percent — particularly on weekends — when the military stopped letting soldiers take their guns home over the weekend.

This data and research have a clear message: States and countries can significantly reduce the number of suicides by restricting access to guns.

13) Since the shooting of Michael Brown, police have killed at least 2,900 people

Soo Oh/Vox

Since the police shooting of Michael Brown in Ferguson, Missouri, on August 9, 2014, police have killed at least 2,902 people as of May 2017.

Fatal Encounters, a nonprofit, has tracked these killings by collecting reports from the media, public, and law enforcement and verifying them through news reports. Some of the data is incomplete, with details about a victim’s race, age, and other factors sometimes missing. It also includes killings that were potentially legally justified, and is likely missing some killings entirely.

A huge majority of the 1,112 deaths on the map are from gunshots, which is hardly surprising given that guns are so deadly compared with other tools used by police. There are also noticeable numbers of fatalities from vehicle crashes, stun guns, and asphyxiations. In some cases, people died from stab wounds, medical emergencies, and what’s called “suicide by cop,” when people kill themselves by baiting a police officer into using deadly force.

14) In states with more guns, more police officers are also killed on duty

Given that states with more guns tend to have more homicides, it isn’t too surprising that, as a study in the American Journal of Public Health found, states with more guns also have more cops die in the line of duty.

Researchers looked at federal data for firearm ownership and homicides of police officers across the US over 15 years. They found that states with more gun ownership had more cops killed in homicides: Every 10 percent increase in firearm ownership correlated with 10 additional officers killed in homicides over the 15-year study period.

The findings could help explain why US police officers appear to kill more people than cops in other developed countries. For US police officers, the higher rates of guns and gun violence — even against them — in America mean they not only will encounter more guns and violence, but they can expect to encounter more guns and deadly violence, making them more likely to anticipate and perceive a threat and use deadly force as a result.

15) Support for gun ownership has sharply increased since the early ’90s

Over the past 20 years, Americans have clearly shifted from supporting gun control measures to greater support of “protecting the right of Americans to own guns,” according to Pew Research Center surveys. This shift has happened even as major mass shootings, such as the attacks on Columbine High School and Sandy Hook Elementary School, have received more press attention.

16) High-profile shootings don’t appear to lead to more support for gun control

Although mass shootings are often viewed as some of the worst acts of gun violence, they seem to have little effect on public opinion about gun rights, based on surveys from the Pew Research Center. That helps explain why Americans’ support for the right to own guns appears to be rising over the past 20 years even as more of these mass shootings make it to the news.

17) But specific gun control policies are fairly popular

Although Americans say they want to protect the right to bear arms, they’re very much supportive of many gun policy proposals — including some fairly contentious ideas, such as more background checks on private and gun show sales and banning semi-automatic and assault-style weapons, according to Pew Research Center surveys.

This type of contradiction isn’t exclusive to gun policy issues. For example, although most Americans in the past said they don’t like Obamacare, most of them also said they like the specific policies in the health-care law. Americans just don’t like some policy ideas until you get specific.

For people who believe the empirical evidence that more guns mean more violence, this contradiction is the source of a lot of frustration. Americans by and large support policies that reduce access to guns. But once these policies are proposed, they’re broadly spun by politicians and pundits into attempts to “take away your guns.” So nothing gets done, and preventable deaths keep occurring.

-30-

NOT NORMAL not normal…Bang…………….bang bang….NOT Normal NOT NormalNOT NormalNOT NormalNOT NormalNOT NormalNOT NORMAL…………….bang bang bang….NOT Normal NOT NormalNOT NormalNOT NormalNOT NormalNOT NormalNOT NORMAL…………….bang bang bang bang….NOT Normal NOT NormalNOT NormalNOT NormalNOT NormalNOT Normal NOT NOT NOT…..

w

Aggravated Aggregator Chews The News

….GOP Economics….Passing the Turd …By The Clean End……

…..you enter a room …looking for the mark……you can’t find the mark…………you are the mark!…….

Workers picket the New York State Capitol in Albany for a raise in the minimum wage in 1963.
Bettmann / Getty
In a rich, post-industrial society, where most people walk around with supercomputers in their pockets and a person can have virtually anything delivered to his or her doorstep overnight, it seems wrong that people who work should have to live in poverty. Yet in America, there are more than ten million members of the working poor: people in the workforce whose household income is below the poverty line. Looking around, it isn’t hard to understand why.
The two most common occupations in the United States are retail salesperson and cashier. Eight million people have one of those two jobs, which typically pay about $9–$10 per hour. It’s hard to make ends meet on such meager wages.
A few years ago, McDonald’s was embarrassed by the revelation that its internal help line was recommending that even a full-time restaurant employee apply for various forms of public assistance. Poverty in the midst of plenty exists because many working people simply don’t make very much money. This is possible because the minimum wage that businesses must pay is low: only $7.25 per hour in the United States in 2016 (although it is higher in some states and cities).
At that rate, a person working full-time for a whole year, with no vacations or holidays, earns about $15,000—which is below the poverty line for a family of two, let alone a family of four. A minimum-wage employee is poor enough to qualify for food stamps and, in most states, Medicaid.
Adjusted for inflation, the federal minimum is roughly the same as in the 1960s and 1970s, despite significant increases in average living standards over that period.
The United States currently has the lowest minimum wage, as a proportion of its average wage, of any advanced economy, contributing to today’s soaring levels of inequality. At first glance, it seems that raising the minimum wage would be a good way to combat poverty.
 
The argument against increasing the minimum wage often relies on what I call “economism”—the misleading application of basic lessons from Economics 101 to real-world problems, creating the illusion of consensus and reducing a complex topic to a simple, open-and-shut case.
According to economism, a pair of supply and demand curves proves that a minimum wage increases unemployment and hurts exactly the low-wage workers it is supposed to help.
The argument goes like this: Low-skilled labor is bought and sold in a market, just like any good or service, and its price should be set by supply and demand.
A minimum wage, however, upsets this happy equilibrium because it sets a price floor in the market for labor. If it is below the natural wage rate, then nothing changes. But if the minimum (say, $7.25 an hour) is above the natural wage (say, $6 per hour), it distorts the market.
More people want jobs at $7.25 than at $6, but companies want to hire fewer employees. The result: more unemployment.
The people who are still employed are better off, because they are being paid more for the same work; their gain is exactly balanced by their employers’ loss. But society as a whole is worse off, as transactions that would have benefited both buyers and suppliers of labor will not occur because of the minimum wage. These are jobs that someone would have been willing to do for less than $6 per hour and for which some company would have been willing to pay more than $6 per hour.
Now those jobs are gone, as well as the goods and services that they would have produced.
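To make the textbook story concrete, here is a toy numerical version in Python. The straight-line supply and demand curves are invented for illustration; only the $6 natural wage and the $7.25 floor come from the passage above.

# Toy Econ 101 labor market with a price floor. The linear curves are made up;
# only the $6 equilibrium wage and the $7.25 floor come from the text.
def jobs_offered(wage: float) -> float:   # demand: employers hire fewer workers as wages rise
    return 1000 - 100 * wage

def job_seekers(wage: float) -> float:    # supply: more people seek work as wages rise
    return 100 * wage - 200

# At the "natural" wage of $6.00, offers equal seekers: 400 jobs, no unemployment.
# At a $7.25 floor, employers offer 275 jobs but 525 people want them, so 250
# would-be workers are unemployed and 125 jobs that existed at $6 are gone.
for wage in (6.00, 7.25):
    print(wage, jobs_offered(wage), job_seekers(wage))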
 
The minimum wage has been a hobgoblin of economism since its origins.
Henry Hazlitt wrote in Economics in One Lesson, “For a low wage you substitute unemployment. You do harm all around, with no comparable compensation.”
In Capitalism and Freedom, Milton Friedman patronizingly described the minimum wage as “about as clear a case as one can find of a measure the effects of which are precisely the opposite of those intended by the men of good will who support it.” Because employers will not pay people more money than their work is worth, he continued, “insofar as minimum-wage laws have any effect at all, their effect is clearly to increase poverty.”
Jude Wanniski similarly concluded in The Way the World Works, “Every increase in the minimum wage induces a decline in real output and a decline in employment.”
On the campaign trail in 1980, Ronald Reagan said, “The minimum wage has caused more misery and unemployment than anything since the Great Depression.” Think tanks including Cato, Heritage, and the Manhattan Institute have reliably attacked the minimum wage for decades, all the while emphasizing the key lesson from Economics 101: Higher wages cause employers to cut jobs. In today’s environment of increasing economic inequality, the minimum wage is a centerpiece of political debate.
California, New York City, and Seattle are all raising their minimums to $15, and President Barack Obama called for a federal minimum of $10.10.
An army of commentators has responded by reminding us of what we should have learned in Economics 101. In The Wall Street Journal, the economist Richard Vedder explained, “If the price of something rises, people buy less of it—including labor. Thus governmental interferences such as minimum-wage laws lower the quantity of labor demanded.”
Writing for Forbes, Tim Worstall offered a mathematical proof: “A reduction in wage costs of some few thousand dollars increases employment. Obviously therefore a rise in wage costs of four or five times that is going to have significant unemployment effects.
QED: A $15 minimum wage is going to destroy many jobs.” (Of theoretical arguments in favor of a higher minimum wage, he continued, “I’m afraid I really just don’t believe those arguments.”)
Jonah Goldberg of the American Enterprise Institute and National Review chimed in, “A minimum wage is no different from a tax on firms that use low-wage and unskilled labor. And if there’s anything that economists agree upon, it’s that if you tax something you get less of it.”

The real impact of the minimum wage, however, is much less clear than these talking points might indicate. Looking at historical experience, there is no obvious relationship between the minimum wage and unemployment: adjusted for inflation, the federal minimum was highest from 1967 through 1969, when the unemployment rate was below 4 percent—a historically low level. When economists try to tackle this question, they come up with all sorts of results.

In 1994, David Card and Alan Krueger evaluated an increase in New Jersey’s minimum wage by comparing fast-food restaurants on both sides of the New Jersey-Pennsylvania border. They concluded, “Contrary to the central prediction of the textbook model … we find no evidence that the rise in New Jersey’s minimum wage reduced employment at fast-food restaurants in the state.”
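Their method was a difference-in-differences comparison, which is simple enough to show in a few lines. In the sketch below, the four per-restaurant employment figures are hypothetical (loosely patterned on the study's published averages); the design, not the data, is the point.

```python
# Difference-in-differences in miniature, using hypothetical
# per-restaurant employment averages.

employment = {
    ("NJ", "before"): 20.4,  # treated state, before its minimum wage rose
    ("NJ", "after"):  21.0,
    ("PA", "before"): 23.3,  # control state just across the border
    ("PA", "after"):  21.2,
}

nj_change = employment[("NJ", "after")] - employment[("NJ", "before")]
pa_change = employment[("PA", "after")] - employment[("PA", "before")]

# Pennsylvania's change stands in for what New Jersey would have done
# anyway (the regional economy, the season); subtracting it isolates
# the effect attributable to the wage increase itself.
effect = nj_change - pa_change
print(f"estimated effect: {effect:+.1f} jobs per restaurant")  # +2.7
```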

Card and Krueger’s findings have been vigorously contested across dozens of empirical studies. Today, people on both sides of the debate can cite papers supporting their position, and reviews of the academic research disagree on what conclusions to draw.

David Neumark and William Wascher, economists who have long argued against the minimum wage, reviewed more than one hundred empirical papers in 2006. Although the studies had a wide range of results, they concluded that the “preponderance of the evidence” indicated that a higher minimum wage does increase unemployment.

On the other hand, two recent meta-studies (which pool together the results of multiple analyses) have found that increasing the minimum wage does not have a significant impact on employment. In the past several years, a new round of sophisticated analyses comparing changes in employment levels between neighboring counties also found “strong earnings effects and no employment effects of minimum wage increases.” (That is, the number of jobs stays the same and workers make more money.)

Not surprisingly, Neumark and Wascher have contested this approach. The profession as a whole is divided on the topic: When the University of Chicago Booth School of Business asked a panel of prominent economists in 2013 whether increasing the minimum wage to $9 would “make it noticeably harder for low-skilled workers to find employment,” the responses were split down the middle.

The idea that a higher minimum wage might not increase unemployment runs directly counter to the lessons of Economics 101.

According to the textbook, if labor becomes more expensive, companies buy less of it. But there are several reasons why the real world does not behave so predictably. Although the standard model predicts that employers will replace workers with machines if wages increase, additional labor-saving technologies are not available to every company at a reasonable cost. Small employers in particular have limited flexibility; at their scale, they may not be able to maintain their operations with fewer workers. (Imagine a local copy shop: No matter how fast the copy machine is, there still needs to be one person to deal with customers.) Therefore, some companies can’t lay off employees if the minimum wage is increased.

At the other extreme, very large employers may have enough market power that the usual supply-and-demand model doesn’t apply to them. They can reduce the wage level by hiring fewer workers (only those willing to work for low pay), just as a monopolist can boost prices by cutting production (think of an oil cartel, for example). A minimum wage forces them to pay more, which eliminates the incentive to minimize their workforce.
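A toy model makes the monopsony logic easier to see. In the sketch below, every parameter (the linear labor supply, the diminishing value of each extra worker) is invented for illustration; the point is only that a binding wage floor can raise a dominant employer's profit-maximizing headcount.

```python
# A toy monopsony model of the paragraph above. All numbers are invented
# illustrations, not estimates.

def supply_wage(L):
    """Wage needed to attract L workers (hypothetical linear supply)."""
    return 2 + 0.01 * L

def total_revenue(L):
    """Revenue produced by L workers, with diminishing returns."""
    return 12 * L - 0.0025 * L ** 2

def best_employment(wage_floor=0.0):
    """Profit-maximizing headcount for a single dominant employer."""
    def profit(L):
        # The employer must pay every worker the same wage, so hiring
        # more people bids up the whole payroll -- unless a floor is
        # already forcing the wage above the supply curve.
        wage = max(wage_floor, supply_wage(L))
        return total_revenue(L) - wage * L
    return max(range(0, 1001), key=profit)

for floor in (0.0, 8.0):
    L = best_employment(floor)
    print(f"floor=${floor:.2f}: hires {L} workers "
          f"at ${max(floor, supply_wage(L)):.2f}")

# floor=$0.00: hires 400 workers at $6.00
# floor=$8.00: hires 600 workers at $8.00  (employment goes *up*)
```

Here the floor removes the employer's reason to hold hiring down: once it must pay $8 regardless, taking on extra workers no longer bids up the wage of everyone already on the payroll.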

In the above examples, a higher minimum wage will raise labor costs. But many companies can recoup cost increases in the form of higher prices; because most of their customers are not poor, the net effect is to transfer money from higher-income to lower-income families. In addition, companies that pay more often benefit from higher employee productivity, offsetting the growth in labor costs.

Justin Wolfers and Jan Zilinsky identified several reasons why higher wages boost productivity: They motivate people to work harder, they attract higher-skilled workers, and they reduce employee turnover, lowering hiring and training costs, among other things. If fewer people quit their jobs, that also reduces the number of people who are out of work at any one time because they’re looking for something better. A higher minimum wage motivates more people to enter the labor force, raising both employment and output.

Finally, higher pay increases workers’ buying power. Because poor people spend a relatively large proportion of their income, a higher minimum wage can boost overall economic activity and stimulate economic growth, creating more jobs. All of these factors vastly complicate the two-dimensional diagram taught in Economics 101 and help explain why a higher minimum wage does not necessarily throw people out of work. The supply-and-demand diagram is a good conceptual starting point for thinking about the minimum wage. But on its own, it has limited predictive value in the much more complex real world.

Even if a higher minimum wage does cause some people to lose their jobs, that cost has to be balanced against the benefit of greater earnings for other low-income workers.
A study by the Congressional Budget Office (CBO) estimated that a $10.10 minimum would reduce employment by 500,000 jobs but would increase incomes for most poor families, moving 900,000 people above the poverty line.
Similarly, a recent paper by the economist Arindrajit Dube finds that a 10 percent increase in the minimum wage should reduce the number of families living in poverty by around 2 percent to 3 percent.
The economists polled in the 2013 Chicago Booth study thought that increasing the minimum wage would be a good idea because its potential impact on employment would be outweighed by the benefits to people who were still able to find jobs. Raising the minimum wage would also reduce inequality by narrowing the pay gap between low-income and higher-income workers.
In short, whether the minimum wage should be increased (or eliminated) is a complicated question. The economic research is difficult to parse, and arguments often turn on sophisticated econometric details.
Any change in the minimum wage would have different effects on different groups of people, and should also be compared with other policies that could help the working poor—such as the negative income tax (a cash grant to low-income households, similar to today’s Earned Income Tax Credit) favored by Milton Friedman, or the guaranteed minimum income that Friedrich Hayek assumed would exist.

Nevertheless, when the topic reaches the national stage, it is economism’s facile punch line that gets delivered, along with its all-purpose dismissal: people who want a higher minimum wage just don’t understand economics (although, by that standard, several Nobel Prize winners don’t understand economics). Many leading political figures largely repeat the central theses of economism, claiming that they have only the best interests of the poor at heart.

In the 2016 presidential campaign, Senator Marco Rubio opposed increasing the minimum wage because companies would then substitute capital for labor: “I’m worried about the people whose wage is going to go down to zero because you’ve made them more expensive than a machine.”

Senator Ted Cruz also chimed in on behalf of the poor, saying, “the minimum wage consistently hurts the most vulnerable.”

Senator Rand Paul explained, “when the [minimum wage] is above the market wage it causes unemployment” because it reduces the number of employees whom companies can afford to hire.

The former governor Jeb Bush also invoked Economics 101, saying that wages should be left “to the private sector,” meaning companies like Walmart, which “raised wages because of supply and demand.”

For Congressman Paul Ryan, raising the minimum wage is “bad economics” and “will hurt the economy because it raises the price of labor.”

This conviction that the minimum wage hurts the poor is an example of economism in action. Economists have many different opinions on the subject, based on different theories and research studies, but when it comes to public debate, one particular result of one particular model is presented as an unassailable economic theorem. (Politicians advocating for a higher minimum wage, by contrast, tend to avoid economic models altogether, instead arguing in terms of fairness or helping the poor.) This happens partly because the competitive market model taught in introductory economics classes is simple, clear, and memorable. But it also happens because there is a large interest group that wants to keep the minimum wage low: businesses that rely heavily on cheap labor.

The restaurant industry has been a major force behind the advertising and public relations campaigns opposing the minimum wage, including many of the op-ed articles repeating the basic lesson of supply and demand.
For example, Andy Puzder, the CEO of a restaurant company (and President-elect Trump’s nominee to lead the Labor Department), explained in The Wall Street Journal, “Every retailer has locations that are profitable, but only marginally. Increased labor costs can push these stores over the line and into the loss column. When that happens, companies that want to stay competitive will close them.” As a result, “broad increases in the minimum wage destroy jobs and hurt the working-class Americans that they are supposed to help.”
A recent study by researchers at the Cornell School of Hotel Administration, however, found that higher minimum wages have not affected either the number of restaurants or the number of people that they employ, contrary to the industry’s dire predictions, while they have modestly increased workers’ pay. Because restaurant closings do not seem to increase, the implication is that paying employees more cuts into excess profits—profits beyond those necessary to stay in business.
Or, as the financial commentator Barry Ritholtz put it, “raising the minimum wage works as a wealth transfer, from shareholders and franchisees, to minimum wage workers.”
But executives don’t have to be seen greedily defending their profits: they can invoke Economics 101 instead, a simple explanation of the world that happens to serve their interests.
 
The fact that this is the debate already demonstrates the historical influence of economism. Once upon a time, the major issue affecting workers’ wages and income inequality was unionization.
In the 1950s, about one in every three wage and salary employees was a union member. Unions, of course, were an early and frequent target of economism. Hayek argued that unions are bad both for workers, because “they cannot in the long run increase real wages for all wishing to work above the level that would establish itself in a free market,” and for society as a whole, because “by establishing effective monopolies in the supply of the different kinds of labor, the unions will prevent competition from acting as an effective regulator of the allocation of all resources.”
For Friedman, unions “harmed the public at large and workers as a whole by distorting the use of labor” while increasing inequality even within the working class. The changing composition of the U.S. workforce, state right-to-work laws, and aggressive anti-unionization tactics by employers—increasingly tolerated by the National Labor Relations Board, beginning with the Reagan administration—all contributed to a long, slow fall in unionization levels.
By 2015, only 12 percent of wage and salary employees were union members—fewer than 7 percent in the private sector. Low- and middle-income workers’ reduced bargaining power is a major reason why their wages have not kept pace with the overall growth of the economy.
According to an analysis by the sociologists Bruce Western and Jake Rosenfeld, one-fifth to one-third of the increase in inequality between 1973 and 2007 results from the decline of unions.
With unions only a distant memory for many people, federal minimum-wage legislation has become the best hope for propping up wages for low-income workers.
And again, the worldview of economism comes to the aid of employers by abstracting away from the reality of low-wage work to a pristine world ruled by the “law” of supply and demand.


-30-

 

………….BaddaBook BaddaBoom!…..Where is Samuel Gompers when you need him?……….Question?……An Unwitting Dupe Is Still A Dupe ….right?………………..w

 

….Aggravated Aggregator…..Chewing News….

..Anita Hill on Weinstein and Trump…

A Watershed Moment for Sexual-Harassment Accusations 

During the 2016 Presidential campaign, eleven women accused Donald Trump of making unwanted sexual advances toward them. Following a well-worn playbook used by other accused sexual harassers, Trump dismissed the women as “horrible, horrible liars” and their allegations as “pure fiction.” The women’s voices swayed very few voters, it would seem.
Even after the “Access Hollywood” tape surfaced, allowing voters to hear Trump boasting about “grabbing” women “by the pussy,” he was elected President. Among those who put his candidacy over the top (at least in the Electoral College) were fifty-three per cent of white female voters.

So why have Harvey Weinstein’s alleged transgressions been taken so much more seriously? One answer, it seems, has less to do with the accused than with the accuser. Weinstein’s sexual-harassment scandal is unlike almost every other in recent memory because many of his accusers are celebrities, with status, fame, and success commensurate to his own.

Sexual harassment is about power, not sex, and it has taken women of extraordinary power to overcome the disadvantage that most accusers face. As Susan Faludi, the author of “Backlash: the Undeclared War Against Women,” put it in an e-mail to me, “Power belongs only to the celebrities these days. If only Trump had harassed Angelina Jolie. . . .”

Anita Hill, a woman with unusual insight into this topic, agrees that the nature of Weinstein’s accusers is the reason that his exposure has proved to be a watershed moment. In a phone interview, Hill emphasized that sexual-harassment cases live and die on the basis of “believability,” and that, in order for the accusers to prevail, “they have to fit a narrative” that the public will buy. At least until now, very few women have had that standing.

Twenty-six years ago, Hill learned this the hard way, when, as a young Yale Law School graduate, she famously testified that Clarence Thomas was unsuitable for confirmation to the Supreme Court, on the grounds that he had repeatedly harassed her while he served as her boss, at the Equal Employment Opportunity Commission. (I wrote about the confirmation process and Hill’s allegations in the book “Strange Justice: The Selling of Clarence Thomas.”) Her testimony blasted the subject of workplace sexual harassment into the public consciousness, but it was swept aside by the Senate.

In televised public congressional hearings, Hill’s credibility was attacked, her character smeared, and her sworn testimony dismissed as an unresolvable “he said, she said” conflict. After Thomas described the process as a “high-tech lynching”—despite the fact that both he and Hill are African-American—the Senate confirmed him.

Hill, who is now a law professor at Brandeis University, told me that what Thomas possessed, like many accused harassers, and unlike many accusers, was a winning “narrative.” The lynching story resonated deeply. Without a similarly widely accepted narrative, Hill was vulnerable to detractors supplying their own readings—imputing false motives, insinuating psychological problems, and smearing her, as the American Spectator notoriously did, as “a bit nutty and a bit slutty.”

In contrast, Hill pointed out, “the Hollywood-starlet narrative is part of the folklore. The casting couch is a long-standing issue.” In addition, she told me, “people often believe the myth that only conventionally beautiful women are harassed—and so it didn’t seem that far-fetched to people that this would happen to beautiful starlets who we all know and love.”

Charges levied at political figures, Hill believes, face a particularly high hurdle. Her case, like those of the women who accused Trump, she says, “was cast as a political story.” In such situations, “everything gets interpreted through a political lens, and it makes it almost impossible” for people to seriously consider whether the accused harasser “is the right person to represent you. It just becomes, ‘This is our guy’ and ‘people are trying to bring him down.’ ”

Meanwhile, as Jessica Leeds, who accused Trump, during the campaign, of groping her on a plane thirty years ago, told the Washington Post, “It is hard to reconcile that Harvey Weinstein could be brought down with this, and [President] Trump just continues to be the Teflon Don.” Melinda McGillivray, another accuser, told the Post that she, too, was having trouble accepting the double standard. “What pisses me off is that the guy is president,” she said. McGillivray accused Trump of grabbing her at Mar-a-Lago, in 2003, when she was twenty-three years old.

Hill says she is “hopeful” that, in light of the Weinstein affair and other recent sexual-harassment revelations against powerful bosses, “people will revisit the women” who accused Trump. But she fears that the Weinstein lesson “won’t translate to everyday women, or even those in high-profile careers in places like Silicon Valley,” who still don’t have the fame, success, and standing of movie stars.

“We need to transfer the believability,” Hill said. She argued that the public needs to understand that Gwyneth Paltrow and Angelina Jolie “are just like women down the street. People need to take this moment to make clear that this is not just about Hollywood.”

  • Jane Mayer has been a New Yorker staff writer since 1995.

-30-

………….no chance this changes much …while this pussy grabber remains in business…….w

 

…GOP to America….Screw You!….

The GOP Tax Plan Tells Us Everything About Who Matters In American Democracy

Your boss, not you.

President Donald Trump, flanked by Speaker Paul Ryan and Rep. Kevin Brady, pitches the GOP’s tax plan on Nov. 2. (Drew Angerer/Getty Images)

The United States is the richest country in the history of the world. Last year, the genius and muscle of the American people generated more than $18.6 trillion in wealth. This year, our brains and brawn will combine to create well over $19 trillion. Despite all the debt theatrics of the Republican Party during the Obama presidency, we owe just $6.2 trillion to other countries ― less than four months of our collective labors at their present value.

Under these circumstances, the question of what the American government can afford is functionally meaningless. If any nation has ever been able to afford quality housing, education, health care, parks, museums ― anything ― the United States can.

And we don’t need to tax anyone, rich or poor, in order to afford these fine things. The wealth — the fruits of our labor — already exists. Taxes are a way of managing the bookkeeping system, of setting national priorities for the distribution of wealth created by good ideas and hard work.

That’s key: Our country’s wealth is created by everybody. It’s not created by rich people. Rich people are what happen when the bookkeeping units we use to keep track of that wealth — the dollars ― get stuck on particular individuals. Sometimes these people fall into the world possessing such accounting anomalies in the form of inheritances. Sometimes they siphon them from other people through the daily operations of commerce. Sometimes Washington decides to hand them more.

On Thursday, President Donald Trump, House Speaker Paul Ryan (R-Wis.) and congressional Republicans proposed a multitrillion-dollar tax cut for a particular slice of very wealthy citizens. There is much more than math at stake: These are matters of justice, social prestige and political power. There is no economic law that governs how the $19 trillion we produce each year must be distributed. Figuring out who should get how much of that $19 trillion is a political choice — and the Republicans’ choice is to give much of that money to a few hundred financial dynasties.

The GOP says its plan is an effort to “fix our broken tax code,” and there can be no doubt that the code is broken. Our fabulously wealthy nation is mysteriously plagued by poverty. More than 40 million Americans currently live in poverty, including 11.5 million children. Over 41 million people live in what the U.S. Department of Agriculture defines as “food insecure households.” Millions of Americans literally could not afford to eat at some point during 2016. Families living a little higher up the economic ladder generally have a tenuous hold on their middle-class status: 78 percent of U.S. households report living paycheck to paycheck.

These economic troubles persist as Wall Street and Silicon Valley are increasingly dividing the spoils of the broader economy among themselves. The financial sector is supposed to function as a sort of utility for manufacturing, agriculture and other elements of what economists call the “real” economy. But today it accounts for nearly 30 percent of corporate profits — about triple its share from three decades ago. Since 2000, compensation in the financial sector has increased at nearly three times the overall rate in the economy. Apple, Amazon, Google and Facebook now mimic financial giants by acquiring tech startup after tech startup and then using their merged muscle to consume the profitable activity of others. Google and Facebook together take in 60 percent of the digital advertising market and collected 99 percent of all online ad revenue growth in the past year.

The GOP tax plan won’t resolve any of those problems. Republicans have assembled a host of tax changes that will ensure that more and more of the nation’s wealth goes to the people who already have most of it. It’s a strategy to inflate existing fortunes, increase profits on Wall Street and enhance the social dominance of people who make their living from investments over people who make their living earning wages and salaries.

The heart of the Republican plan is a permanent cut to the corporate tax rate from 35 percent to 20 percent. The benefits of that will accrue to people who own corporations. If you hold stock in a corporation and the tax rate on that corporation’s profits falls, the value of your stock will rise. You become wealthier without doing anything. It doesn’t matter if the company you own pays its workers a living wage, develops state-of-the-art technology or names violent felons to its board of directors. However prudently or recklessly that company had been earning a profit, it will suddenly become more valuable to its owners.
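The mechanics are simple enough to put in a few lines. Treating a share as a claim on a perpetual stream of after-tax profits, cutting the rate from 35 to 20 percent multiplies that stream by 0.80/0.65, roughly a 23 percent jump. In the sketch below, the per-share earnings and the discount rate are arbitrary assumptions; the 23 percent is not, since it depends only on that ratio.

```python
# Back-of-the-envelope valuation, assuming a share is a claim on a
# perpetual stream of after-tax profits. The $10.00 of pre-tax earnings
# and the 8% discount rate are arbitrary; the ~23% jump is not, since it
# only depends on the ratio (1 - 0.20) / (1 - 0.35).

PRETAX_EARNINGS = 10.00  # per share, hypothetical
DISCOUNT_RATE = 0.08     # hypothetical required return

def share_value(tax_rate):
    # Perpetuity formula: value = after-tax earnings / discount rate.
    return PRETAX_EARNINGS * (1 - tax_rate) / DISCOUNT_RATE

old, new = share_value(0.35), share_value(0.20)
print(f"at a 35% rate: ${old:.2f}")                       # $81.25
print(f"at a 20% rate: ${new:.2f}")                       # $100.00
print(f"owner's windfall: {100 * (new / old - 1):.0f}%")  # 23%
```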

That’s just great if you own tons of stock. But according to Gallup polling, only about half of Americans own any stock at all — through a retirement account or otherwise. Most households that do own stock don’t own very much. Only 22 percent of households own at least $25,000 worth of stock, research from New York University economist Edward Wolff shows.

At the top of the distribution, however, stock ownership accounts for a tremendous share of new wealth. On average, households in the top 1 percent receive about 36 percent of their income from financial assets, while the 400 wealthiest American households receive almost 75 percent of their income from capital gains and dividends. It’s not hard to figure out the target market here.

Not so long ago, President George W. Bush took a lot of heat for slashing the tax on capital gains — the tax you pay when you sell stocks, bonds or real estate investments. It was, critics said, a shameless giveaway to the idle rich. The current GOP tax plan offers a similar benefit without the hassle of having to actually sell any investments: When stock prices go up due to a drop in corporate taxes, the people who own the stock get richer. And because the GOP bill will also ultimately eliminate the estate tax, owners of financial assets could pass their holdings to their heirs tax-free in perpetuity, allowing financial dynasties to grow and grow independently of the intelligence or enterprise of those stewarding any particular generation of family wealth.

No trained economist seriously believes that shoveling unearned benefits to people who just happen to own or inherit financial assets is good for growth, productivity or anything else. And Republicans aren’t really trying to hide what they’re up to. In addition to the corporate rate cut, the GOP would allow companies to immediately write off the full value of new capital investments — when, say, a company purchases new equipment or technology. This tax perk would actually encourage some worthwhile activity: If businesses invest in improving their longer-term operations, they will realize an immediate economic benefit from the tax code. Cutting the corporate tax rate down to 20 percent doesn’t encourage anything except a one-time jolt to asset prices.

When Republicans dole out big tax cuts, they typically offer something for low- and middle-income families to make the process a little less unseemly. It’s not clear if they’ll be able to include those perks this time around, and the slapdash, piecemeal approach to helping everyone outside the capital class makes their priorities perfectly clear. When the tax framework was released Thursday morning, House Ways and Means Committee Chairman Kevin Brady (R-Texas) talked up a slightly more generous child tax credit and some additional deductions for middle-class families. The bill itself shows that many of the additional deductions are countered by the elimination of other popular deductions. The expanded child tax credit expires after five years, to be replaced by an increase in taxes on families with children, while the corporate rate drops to 20 percent forever. Republicans on Thursday couldn’t promise that their bill wouldn’t ultimately raise taxes on middle-class families. It’s even conceivable that bona fide rich people could end up worse off under the plan if they draw their incomes from a garden-variety high salary, rather than stock or interest.

So never mind the budget deficits and growth projections. The GOP tax plan is a simple political statement about who matters more in American democracy: the heirs to hedge fund fortunes or everyone else in the country. Trump and the Republicans have chosen the dynasts. This is not a necessity; we could easily afford a different set of priorities. Social domination by the financial sector is a choice.

…🎼catch a falling star and put it in your pocket…..

……from the ….do I Really.. want to see it coming?………department…..

NASA JPL latest news release
Astronomers Complete First International Asteroid Tracking Exercise

An international team of astronomers led by NASA scientists successfully completed the first global exercise using a real asteroid to test global response capabilities.

Planning for the so-called “TC4 Observation Campaign” started in April, under the sponsorship of NASA’s Planetary Defense Coordination Office. The exercise commenced in earnest in late July, when the European Southern Observatory’s Very Large Telescope recovered the asteroid. The finale was a close approach to Earth in mid-October. The goal: to recover, track and characterize a real asteroid as a potential impactor — and to test the International Asteroid Warning Network for hazardous asteroid observations, modeling, prediction and communication.

The target of the exercise was asteroid 2012 TC4 — a small asteroid originally estimated to be between 30 and 100 feet (10 and 30 meters) in size, which was known to be on a very close approach to Earth. On Oct. 12, TC4 safely passed Earth at a distance of only about 27,200 miles (43,780 kilometers) above Earth’s surface. In the months leading up to the flyby, astronomers from the U.S., Canada, Colombia, Germany, Israel, Italy, Japan, the Netherlands, Russia and South Africa all tracked TC4 from ground- and space-based telescopes to study its orbit, shape, rotation and composition.

“This campaign was an excellent test of a real threat case. I learned that in many cases we are already well-prepared; communication and the openness of the community was fantastic,” said Detlef Koschny, co-manager of the near-Earth object (NEO) segment in the European Space Agency (ESA)’s Space Situational Awareness program. “I personally was not prepared enough for the high response from the public and media — I was positively surprised by that! It shows that what we are doing is relevant.”

“The 2012 TC4 campaign was a superb opportunity for researchers to demonstrate willingness and readiness to participate in serious international cooperation in addressing the potential hazard to Earth posed by NEOs,” said Boris Shustov, science director for the Institute of Astronomy at the Russian Academy of Sciences. “I am pleased to see how scientists from different countries effectively and enthusiastically worked together toward a common goal, and that the Russian-Ukrainian observatory in Terskol was able to contribute to the effort.” Shustov added, “In the future I am confident that such international observing campaigns will become common practice.”

Using the observations collected during the campaign, scientists at NASA’s Center for Near-Earth Object Studies (CNEOS) at the Jet Propulsion Laboratory in Pasadena, California, were able to precisely calculate TC4’s orbit, predict its flyby distance on Oct. 12, and look for any possibility of a future impact. “The high-quality observations from optical and radar telescopes have enabled us to rule out any future impacts between the Earth and 2012 TC4,” said Davide Farnocchia from CNEOS, who led the orbit determination effort. “These observations also help us understand subtle effects such as solar radiation pressure that can gently nudge the orbit of small asteroids.”

A network of optical telescopes also worked together to study how fast TC4 rotates. Given that TC4 is small, astronomers expected it to be rotating fast, but were surprised when they found that TC4 was not only spinning once every 12 minutes, it was also tumbling. “The rotational campaign was a true international effort. We had astronomers from several countries working together as one team to study TC4’s tumbling behavior,” said Eileen Ryan, director of the Magdalena Ridge Observatory. Her team tracked TC4 for about 2 months using the 7.9-foot (2.4-meter) telescope in Socorro, New Mexico.

The observations that revealed the shape and confirmed the composition of the asteroid came from astronomers using NASA’s Goldstone Deep Space Network antenna in California and the National Radio Astronomy Observatory’s 330-foot (100-meter) Green Bank Telescope in West Virginia. “TC4 is a very elongated asteroid that’s about 50 feet (15 meters) long and roughly 25 feet (8 meters) wide,” said Marina Brozovic, a member of the asteroid radar team at JPL.

Finding out what TC4 is made of turned out to be more challenging. Due to adverse weather conditions, traditional NASA assets studying asteroid composition — such as the NASA Infrared Telescope Facility (IRTF) at the Mauna Kea Observatory in Hawaii — were unable to narrow down what TC4 was made of: either dark, carbon-rich or bright igneous material.

“Radar has the ability to identify asteroids with surfaces made of highly reflective rocky or metallic materials,” said Lance Benner, who led the radar observations at JPL. “We were able to show that radar scattering properties are consistent with a bright rocky surface, similar to a particular class of meteorites that reflect as much as 50 percent of the light falling on them.”

In addition to the observation campaign, NASA used this exercise to test communications between the many observers and also to test internal U.S. government messaging and communications up through the executive branch and across government agencies, as it would during an actual predicted impact emergency.

“We demonstrated that we could organize a large, worldwide observing campaign on a short timeline, and communicate results efficiently,” said Vishnu Reddy of the University of Arizona’s Lunar and Planetary Laboratory in Tucson, who led the observation campaign. Michael Kelley, TC4 exercise lead at NASA Headquarters in Washington added, “We are much better prepared today to deal with the threat of a potentially hazardous asteroid than we were before the TC4 campaign.”

NASA’s Planetary Defense Coordination Office administers the Near-Earth Object Observations Program and is responsible for finding, tracking and characterizing potentially hazardous asteroids and comets coming near Earth, issuing warnings about possible impacts, and assisting coordination of U.S. government response planning, should there be an actual impact threat.

-30-

…….gotta be a way way fun planning committee there……….w

 

…Announcing: Your Current Phone Is Garbage!…

…that’s right…that’s right….you know it’s true…..

Technotropolis, Calif.—With the release of our latest operating system and our newest line of smartphones, we’re excited to announce that Your Current Phone is now an obsolete piece of shit!

We’ve been working non-stop to minimize your device’s functionality while increasing the amount of memory required to handle mandatory updates. By the time we’re done tampering with that radioactive paperweight you still call a smartphone, you’ll be banging at the door of your nearest production plant, begging for any shiny device with a reasonable battery life.

“Your Current Phone was the best product we’d ever made, but our New Model is even better. You should buy it,” J. Ander Nichols, senior vice-president of constant marketing, said.

Outdated Hardware, Overpowering Software

Notice that your apps are unmistakably slower? You must’ve finally given in to those near-constant update notifications we’ve been blasting out!

Introducing the world’s (temporarily) most powerful operating system, “OSn+1,” also fondly known as the Reason Your Current Phone Dies When You Try to Do Anything with Less Than Seventy-eight Per Cent Battery Life.

Snapchatting without Wi-Fi? Your phone just crashed. Listening to the “Hamilton” soundtrack while scrolling through devastating political news? Not with this cold, black screen you’re holding. Booking a car to J.F.K.? Hope you memorized the license plate! (Dead phone.)

Designed for a Wired Past

Your Current Phone continues to use a Poorly Made Charging Cord, assembled from rusty old Chinese metal. We salvaged it from the ashes of one of our burned-down factories. That warehouse went up in flames! Probably faulty wiring. Anyway, your phone can barely carry enough juice to light a birthday candle, so keep that cable close.

Oh, and is that a headphone jack, or are you just a sorry, stingy little loser? Both! While everyone else has gotten audio buds implanted into their skulls, you’re still hanging strings from your ears. Cute.

Cracked Display

Unlike our New Model’s gorgeous Liquid Graphene Super Duper Diamond Orgasm Retina Display, your current screen looks like shards of glass held together with Trident gum. Don’t bother getting a protective case at this point: everything on the market has already been stretched and/or shrunk to fit our slightly bigger and/or smaller New Models!

Have you noticed those random patches of your touch screen that don’t respond anymore? We threw those in with the new OS! Is it water damage? Heat damage? Who knows. We’ll let you explain to your mother why you “physically couldn’t pick up” when she called you six times to ask if you got the promotion yet. Take a good look at Your Current Phone, and then ask if you would hire you for the job.

Your Current Phone Comes in One Color: Faded

Remember when Your Current Phone was that new Space Something color? It was so modern, so exciting. Its current color, Faded, is an unappealing pale finish intended to remind you of what once was. All those ambitious projects you started but never got around to completing. The people in your life who inevitably faded away, even after promising to “hang out sometime soon.” The old flame you thought could be “the one” but turned out to be yet another dead end on the path of your sad, meaningless life.

You Look Like an Idiot

Why are you still using that old thing? Do you also take a steamboat to work? Do you travel by handcar on a railroad like some cartoon character from the twenties? You don’t deserve green text messages, let alone blue. We should change the color of your texts to red, so we self-respecting members of society can steer clear of you and that thing you pretend is still technology.

Is Something Wrong?

You were once a valued customer! What’s holding you back now? Don’t you want to feel special again? Are you having trouble accessing our Web site? Are there issues at home? Is it the price? Have you tried making more money? We have lots of money—you could always work for us!

Sorry, We Went Too Far

Clearly, you’re struggling right now and don’t need us to rub it in. Enjoy being a bullheaded Luddite with that fossil of a phone glued to your palm. Far be it from us to try and save you from yourself.

The New Model Is Already Obsolete

Just take it. The Even Newer Model is where it’s at!


-30-

…….if that ain’t fun………I don’t want none…….

w

……it’s all right…

……….even if it ain’t……

End Of The Line

..California billionaire launches ads urging Trump impeachment…

SACRAMENTO, Calif. (AP) — California billionaire Tom Steyer announced Friday that he will dump at least $10 million into a national television advertising campaign calling for President Donald Trump’s impeachment.

In the ad, Steyer argues Trump should be ousted from office because he has edged the country toward nuclear war, obstructed justice at the FBI and threatened to shut down news organizations he does not like. He urges viewers to call their members of Congress and tell them to bring articles of impeachment.

“People in Congress and his own administration know this president is a clear and present danger who is mentally unstable and armed with nuclear weapons,” Steyer says in the ad. “And they do nothing.”

Steyer wrote a letter to Democrats on Wednesday asking that, if they take control of Congress in 2018, they impeach President Trump. FULL TEXT BELOW

Steyer plans to spend eight figures to air the television ads nationally, but he would not give an exact amount. His investment comes as he considers running against U.S. Sen. Dianne Feinstein, a fellow Democrat, and as Democrats in Washington argue over whether efforts to impeach Trump are smart or worthwhile.

“If Democrats want to appease the far left and their liberal mega-donors by supporting a baseless, radical effort that the vast majority of Americans disagree with, then have at it,” said Michael Ahrens, a spokesman for the Republican National Committee.

Republicans will focus on “issues voters actually care about,” such as the economy and cutting taxes, he said. The White House did not immediately respond to a request for comment. 

Steyer also said he will spend seven figures on an accompanying digital ad campaign.

An impeachment resolution brought last week by Democratic U.S. Rep. Al Green of Texas died before coming up for a vote. Green has vowed to try again.

But Democrats such as House Minority Leader Nancy Pelosi of California think impeachment attempts are not worthwhile because they will fail in the Republican-led Congress and could energize GOP voters heading into the next election.

Steyer has poured his wealth into a variety of political efforts, mostly focused on stopping climate change.

Kathleen Ronayne, Associated Press, Friday, October 20, 2017

 

-30-

THE LETTER

Wednesday, October 18, 2017

Dear Elected Official,

This is not a time for “patience” — Donald Trump is not fit for office. It is evident that there is zero reason to believe “he can be a good president.”

Whether by the nature of Mr. Trump’s relationship with Vladimir Putin and Russia, his willingness to exploit the office of the Presidency for his personal gain and treat the government like a family enterprise, his conduct during Charlottesville, his decision to pull out of the Paris climate accords, or his seeming determination to take the nation to war, he has violated the Constitution, the office of the Presidency, and the trust of the public. He is a clear and present danger to the United States of America.

Republican Senator Bob Corker, Chair of the Senate Foreign Relations Committee, referred to the Trump White House as a day care center, and observed that this president has put us “on the path to World War III.” This comes following reports that Trump’s own Secretary of State referred to him as a “moron” and that Chief of Staff John Kelly and Secretary of Defense James Mattis have an agreement not to leave Trump home alone for fear of what he could do. And we have seen other Republican Senators, including Senators Sasse and Flake, express their own profound concerns.

If Trump has lost the trust of the members of his own administration and leading members of his own party, surely it is time to act.

An accounting of his record to date leads to the same conclusion. He is turning his back on Lady Liberty by holding immigrant children hostage. He is actively sabotaging the Affordable Care Act — a law he is constitutionally obligated to faithfully execute — while seeking to strip away health care coverage that will leave millions of Americans to choose between life and bankruptcy. He is repealing clean air protections and unleashing polluters, even as increasingly catastrophic natural disasters supercharged by our warming planet ravaged the country throughout the summer — from hurricanes Harvey, Irma, and Maria, to the wildfires that have raged across California, Oregon, Washington, Idaho, and Montana. He has threatened to reduce aid for millions of American citizens in Puerto Rico who are struggling to survive without drinkable water or electricity — a move that would be a total dereliction of his duty. And every day, Americans are left bracing for a Twitter screed that could set off a nuclear war. These actions represent systemic attacks on our nation’s future. They endanger every single one of your constituents. That’s why you have a duty to speak out.

There is no moral reason to remain silent about this. Constitutional experts like Noah Feldman have already laid out clear legal and historical foundations for impeachment.  Founding Father Alexander Hamilton, a co-author of the Federalist Papers — and an immigrant himself — argued that “high crimes and misdemeanors” could be defined as “abuse or violation of some public trust.” This president has clearly already exceeded these standards. Congress has impeached past presidents for far less.

While we know that Republicans do not seem prepared to pursue impeachment even as members in their own ranks openly question Trump’s fitness for office, we are all working hard to ensure Democrats will take back the House and Senate in 2018.

Given Trump’s total lack of fitness for office, the question of impeachment becomes a very real issue should we succeed in our midterm goal. That makes it imperative for every Governor of every state, and every mayor of every city, to acknowledge where they stand. This question affects the lives of every single American. They deserve to hear whether or not our party is willing to do what is necessary to protect them and their families. This is not an academic exercise. The very stability of the Republic is at stake.

So, by way of this letter, I am asking you today to make public your position on the impeachment of Donald Trump, and to urge your federal representatives to remove him from office at once. Every day he remains in reach of the nuclear codes is another day for him to menace the citizens you serve and protect. Your constituents deserve to know they are represented by people in every level of government who have the patriotism and political courage to stand up and take action when it is so desperately needed.  This is not a time to give in to an establishment that insists on acting the way the establishment always does, with “patience” or “caution.” It is an unprecedented moment, and it calls for extraordinary measures. We cannot remain fixated on what is politically smart. We have to do what is morally right.

Sincerely,

Tom Steyer

….and now for the enablers…the facilitators….the cover up stooges…. all agents ….on Harvey’s team…..

……..always feel the need to take a shower after one of these…….

Harvey Weinstein’s Former Employees Reckon With What They Knew and What They Didn’t

Discussing who knew what about Harvey Weinstein has become a grim, un-fun Hollywood parlor game.

Harvey throttled someone. Harvey called an employee a fucking moron. Harvey threw the shoes, the book, the phone, the eggs. Harvey went to work with his shirt on inside-out and no one had the courage to tell him. If you fucking say anything to him, the assistant said to the other assistant, I’m dead. Harvey would eat the fries off your plate, smash them in his face, and wash them down with a cigarette and a Diet Coke. He belittled and berated: You can’t name three Frank Capra movies? What the fuck are you even doing here?

He was funny; he was grotesque, a boisterous, boorish, outrageous, gluttonous caricature of a man, a Hollywood type. A “man of appetites”; a philanderer; a cartoon beast, surrounded by beauties. Years later, the people who worked for him—survivors, they called themselves, of Miramax and the Weinstein Company—still met regularly to tell stories about Harvey Weinstein. “I always thought it was interesting that a lot of people who left Miramax either ended up running shit in Hollywood or became social workers,” an alumna of the company told me.

Harvey stories have a new valence now, in the aftermath of revelations by the Times and by The New Yorker, and the term “survivors” must be reserved for those who have alleged intense sexual harassment, assault, and rape. (Through a representative, Weinstein has denied all accusations of non-consensual sex.) The stories aren’t funny anymore, because now we know the story behind them. Weinstein was not a philanderer, with inordinately, unaccountably attractive “girlfriends”; he was, apparently, according to the forty-some women who have come forward so far, including many of Hollywood’s most visible celebrities, engaged in quid-pro-quo harassment that, in certain cases, involved coercion and physical force. But, unlike Donald Trump, our show-biz President, a bully who has boasted of sexual assault and been accused of sexual misconduct numerous times, Weinstein is finally being condemned and punished for his treatment of women. (Trump denies all allegations of sexual misconduct.)

Workplace sexual assault, according to the feminist legal scholar Catharine MacKinnon, is “dominance eroticized.” More than misplaced desire, she writes, it is “an expression of dominance laced with impersonal contempt, the habit of getting what one wants, and the perception (usually accurate) that the situation can be safely exploited in this way—all expressed sexually.” Among the many painful ironies of Weinstein’s public activities (the professorship in Gloria Steinem’s name that he helped endow, his support of Hillary Clinton), the one I find the most brutal and defeating is that he made movies with substantial and three-dimensional parts for women, and it was this rare commodity that he is said to have used to exploit the women who wanted those roles. Their desire for professional advancement demeaned them—even after he’d made some of them into stars. (He never let them forget it: who made them, who owned them.) There were rumors, yes, of the did-she-or-didn’t-she variety. Because the actresses were ambitious, they were seen as “ambitious,” and his predation went on, hiding in plain view. No one ever asked, Did he? That was the given, and it is only now that the abuse is being called by its true name. The company’s reputation for artistic integrity and highbrow fare was a disguise that Harvey Weinstein wore, his version of the black-ski-mask cliché.

Terry Press, the president of CBS Films, told me that Weinstein’s legendary bullying contributed to the silence within his company. “I worked at DreamWorks for ten years,” she said. “It’s a private company. No one threw an ashtray at someone’s head. Nobody called someone the C-word in a meeting. I consider many people at the Weinstein Company to have suffered some sort of Stockholm syndrome. You’d say to them, ‘Hello, in the real world this is actionable.’ In a private company, the owners dictate the culture. If you go to meetings and someone’s physically accosting an employee, the message it sends is, It’s a free-for-all, no rules and no decorum.”

In the past two weeks, discussing who knew what and when has become a grim, un-fun Hollywood parlor game, playing out on social media, at dinner parties, over drinks. The screenwriter Scott Rosenberg, who benefitted from Weinstein’s largesse and support for the better part of a decade, in the heyday of Miramax, recently posted a screed on Facebook (since taken down or made private), addressing his own complicity and that of “You, the big producers; you, the big directors; you, the big agents; you, the big financiers. And you, the big rival studio chiefs; you, the big actors; you, the big actresses; you, the big models. You, the big journalists; you, the big screenwriters; you, the big rock stars; you, the big restaurateurs; you, the big politicians.” He writes, “You know who are. You know that you knew. And do you know how I know that you knew? Because I was there with you. And because everybody-fucking-knew.” They didn’t literally know about the rape, he writes, but “We knew something was bubbling under. Something odious. Something rotten.” Judd Apatow wrote on Twitter, “Sell that company for scrap.” A few days later, speaking at an industry luncheon hosted by Variety, he expanded on his remark. “And what about his staff? People say, ‘Did they know?’ Of course they knew.”

But many current and former Weinstein Company employees have come forward in recent days to insist that, in fact, they didn’t know. This week, several employees at the Weinstein Company’s New York office drafted a statement defending themselves, which they submitted to The New Yorker. The document, which they say has the support of approximately thirty of their colleagues at the Weinstein Company, is anonymous: it’s unclear, with the company in turmoil, whether the nondisclosure agreements they signed as a condition of employment will be enforced. One supporter of the statement told me, “This awful helpless feeling of being vilified for something you never knew was creating this feeling of true despair.” The statement reads, in part:

We all knew that we were working for a man with an infamous temper. We did not know we were working for a serial sexual predator. We knew that our boss could be manipulative. We did not know that he used his power to systematically assault and silence women. We had an idea that he was a womanizer who had extra-marital affairs. We did not know he was a violent aggressor and alleged rapist.

But to say that we are shocked and surprised only makes us part of the problem.

Our company was built on Harvey’s unbridled ambition—his aggressive deal making, his insatiable desire to win and get what he wanted, his unabashed love for celebrity—these traits were legendary, and the art they produced made an indelible mark on the entertainment industry.

But we now know that behind closed doors, these were the same traits that made him a monster. He created a toxic ecosystem where his abuse could flourish unchecked for decades.

An assistant who co-wrote the letter described to me, by phone, the events of October 5th, the day the first Times story was published. Harvey came in to work at 375 Greenwich Street, his fiefdom (his brother Bob worked at a different address), where he had a “lair”: in addition to an office, there was a large living room with a commodious couch and trophy walls of photographs of Harvey and his stars. He expressed satisfaction that the piece had come out on a Thursday rather than a Sunday, when, by his reckoning, more people would have seen it. The assistant told Harvey that he was resigning from his position. (He is hoping to be reassigned within the company.) Harvey offered to provide a reference—he didn’t yet understand how undesirable that would be. Later, as the assistant was leaving to spend the afternoon drinking and strategizing with his colleagues at a nearby pub, he says that Harvey reached for his arm. Sobbing, Harvey said, “I’m not that guy. I’m not that guy.”

On the following Tuesday, the staff convened in a conference room, with soul-food takeout from Bubby’s. As they gathered, someone mentioned that The New Yorker story was up. The assembled employees read in silence. They listened to the tape. They knew that voice too well. Some began to shake, and many of them wept as they contemplated the roles they might have played as accomplices, unwitting or not. “People were having a wave of retroactive memories,” a creative executive who worked on the letter told me. “Some of the stories were within the time frame of people who still worked there.” A longtime employee offered to answer questions based on his experiences travelling with Harvey. There was a silence, and then, according to the creative executive, “One of the female assistants was, like, ‘Tell us everything.’ ”

In the time since, people both inside and outside Hollywood have been processing the reality that Harvey Weinstein is “that guy.” In fitting revenge for his reduction of women to bodies, there has been thorough discussion of Weinstein’s own ungainliness and girth (not incidental, as he allegedly used his imposing size to threaten, impede, and overwhelm his victims). Fired from the Weinstein Company, external validation stripped away, he’s now just a body and its urges—not the passionate filmmaker responsible for eighty-one Oscar wins but the animal who allegedly masturbated into a potted plant, or a kitchen pot, or both. (A Weinstein spokesperson told The New Yorker, “There are many stories about Harvey Weinstein that have become urban legend. Some are true and some are not.”)

On Saturday evening, a few hours after Weinstein was expelled from the Academy of Motion Picture Arts and Sciences, a group of women, former assistants and executives at Miramax and the Weinstein Company, gathered at a house in Los Angeles. They stood around a kitchen island, nibbling on grapes and cheese and drinking wine, while the rice water boiled and the hostess’s husband put the kids to bed. The women said they hadn’t known—they were not “honeypots”—but they were struggling to make sense of how Weinstein’s behavior had gone unchecked. They were dealing with the twin discomforts of having their entire professional community wonder if they were complicit in or victims of his assaults, or both. They wanted Harvey’s downfall to mean something and to create real change within their industry and in the world.

“You feel a little bit like an idiot,” the hostess said. “There were things you knew. Clearly there was also a strategy on his part. He could be flamboyant in his ‘People can know I’m a womanizer.’ But the idea that he took it to sexual assault or even rape was really well hidden.”

The woman standing to my left, in bluejeans, said, “Looking back, the problem is that the unspoken message we were being given from the powers that be across media, Hollywood, and politics is that he can get away with this shit.”

“But get away with what?” a woman in black said. “At the time, you didn’t know this was happening. What you knew was that he was a bully, a screamer, a yeller, a thrower, a pig—not that he was a rapist.” She said that she and her husband got into a fight when the news broke. He insisted that she and her friends must have known.

The hostess said, “The public lynching has been so severe that I think it’s a huge warning call to men in the future. Probably there are people—any number of agents—”

“I want to talk about that,” the woman in jeans said. “The larger culture of harassment and bullying, because you don’t feel like you can come out and report something. The patriarchy is creating this environment for men and women of misogyny and sexism. There is somehow this understanding that you can be this caricature of being bombastic and bullying and treating your underlings—”

“As inhuman,” a fourth woman, chopping chicken, said.

“When this shit happens, a woman doesn’t know who she can turn to, because everyone seems to have a blind eye to it,” the woman in jeans said. “The people around him, his enablers, and there had to have been enablers, men and women, perpetuated the bullying culture. As long as that’s O.K., we’re in trouble, we can’t get out from under it.”

The woman in black said, “It’s naïve to think that Harvey is the only Harvey out there.” (On Tuesday, Harvey Weinstein’s business partner and brother, Bob, who has called Harvey “indefensible and crazy,” was accused of sexual harassment by a showrunner on a Weinstein Company television project, a claim he denies.)

After a while, the hostess said that she had a Harvey story, one whose import she only now understood. She had never told her friends; it didn’t seem like a big enough deal before. After she’d worked at Miramax for a couple of years, a position opened to be Weinstein’s assistant. She wanted to be a producer, so she interviewed for the job. “You’re too pretty to be working for Harvey,” a senior female executive told her. “It will embarrass him.” Confused but undeterred, she persisted. Finally, one of Weinstein’s former assistants took her out to lunch. “Do not take this job,” the former assistant said. “You will see things you will never be able to unsee, and you will do things you will never forgive yourself for.” She didn’t have enough information to comprehend the warning, but she heeded it anyway. The gravity of her near-miss is still sinking in. “There are obviously people that knew,” she said. “And, if they knew, and they knew you, they would protect you.”

The hostess walked me to the door. She had one last point to make. As Hollywood reckoned with its own culture and how to evolve it, there was a more pressing change she did not want people to lose sight of.

“Please, may this empower people to step forward about Trump, and we can bring him down,” she said. With Gwyneth Paltrow and Angelina Jolie and countless others speaking out about Weinstein—and more than five hundred thousand women sharing their own experiences with sexual harassment under the hashtag #metoo—the floodgates are open. (On Sunday, BuzzFeed reported that a former contestant on “The Apprentice,” who has accused Trump of groping and kissing her, had subpoenaed his campaign for documentation related to “any woman alleging that Donald J. Trump touched her inappropriately.” Trump has denied her allegations.) The hostess told me, “Trump women can come through and throw him down. That would be the biggest play women can make. That’s what we need to do.”

..The Coming Software Apocalypse…

…………….this one is a hike!………….milk and cookies and food fight …. at the end……………

A small group of programmers wants to change how we code—before catastrophe strikes.

 

The 911 outage, at the time the largest ever reported, was traced to software running on a server in Englewood, Colorado. Operated by a systems provider named Intrado, the server kept a running counter of how many calls it had routed to 911 dispatchers around the country. Intrado programmers had set a threshold for how high the counter could go. They picked a number in the millions.

Shortly before midnight on April 10, the counter exceeded that number, resulting in chaos. Because the counter was used to generate a unique identifier for each call, new calls were rejected. And because the programmers hadn’t anticipated the problem, they hadn’t created alarms to call attention to it. Nobody knew what was happening. Dispatch centers in Washington, California, Florida, the Carolinas, and Minnesota, serving 11 million Americans, struggled to make sense of reports that callers were getting busy signals. It took until morning to realize that Intrado’s software in Englewood was responsible, and that the fix was to change a single number.
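The published accounts describe the logic, not the code, so the failure mode can only be sketched. Here is a minimal, hypothetical Python reconstruction; the names, the cap value, and the structure are all invented for illustration:

    # Hypothetical sketch of the failure mode -- not Intrado's actual code.
    MAX_CALLS = 40_000_000   # a hard cap picked "in the millions"
    call_counter = 0         # running tally, doubling as each call's unique ID

    def route_call(caller):
        global call_counter
        if call_counter >= MAX_CALLS:
            # Nobody anticipated reaching the cap, so there is no alarm and
            # no failover here: the call is silently rejected.
            return None
        call_counter += 1
        return {"call_id": call_counter, "caller": caller, "routed": True}

The bug is not in any one line; every line does what it was told. The danger is the unexamined assumption that the counter would never get that high, and the absence of any alarm when it did.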
Not long ago, emergency calls were handled locally. Outages were small and easily diagnosed and fixed. The rise of cellphones and the promise of new capabilities—what if you could text 911? or send videos to the dispatcher?—drove the development of a more complex system that relied on the internet. For the first time, there could be such a thing as a national 911 outage. There have now been four in as many years.

It’s been said that software is “eating the world.”

More and more, critical systems that were once controlled mechanically, or by people, are coming to depend on code.

This was perhaps never clearer than in the summer of 2015, when on a single day, United Airlines grounded its fleet because of a problem with its departure-management system; trading was suspended on the New York Stock Exchange after an upgrade; the front page of The Wall Street Journal’s website crashed; and Seattle’s 911 system went down again, this time because a different router failed. The simultaneous failure of so many software systems smelled at first of a coordinated cyberattack. Almost more frightening was the realization, late in the day, that it was just a coincidence.

“When we had electromechanical systems, we used to be able to test them exhaustively,” says Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years. She became known for her report on the Therac-25, a radiation-therapy machine that killed six patients because of a software error. “We used to be able to think through all the things it could do, all the states it could get into.” The electromechanical interlockings that controlled train movements at railroad crossings, for instance, only had so many configurations; a few sheets of paper could describe the whole system, and you could run physical trains against each configuration to see how it would behave. Once you’d built and tested it, you knew exactly what you were dealing with.
Software is different. Just by editing the text in a file somewhere, the same hunk of silicon can become an autopilot or an inventory-control system. This flexibility is software’s miracle, and its curse. Because it can be changed cheaply, software is constantly changed; and because it’s unmoored from anything physical—a program that is a thousand times more complex than another takes up the same actual space—it tends to grow without bound. “The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”

Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing. Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”

This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”

The attempts now underway to change how we make software all seem to start with the same premise: Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.

Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code. When you press your foot down on your car’s accelerator, for instance, you’re no longer controlling anything directly; there’s no mechanical link from the pedal to the throttle. Instead, you’re issuing a command to a piece of software that decides how much air to give the engine. The car is a computer you can sit inside of. The steering wheel and pedals might as well be keyboard keys.

Like everything else, the car has been computerized to enable new features. When a program is in charge of the throttle and brakes, it can slow you down when you’re too close to another car, or precisely control the fuel injection to help you save on gas. When it controls the steering, it can keep you in your lane as you start to drift, or guide you into a parking space. You couldn’t build these features without code. If you tried, a car might weigh 40,000 pounds, an immovable mass of clockwork.

Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code. But just because we can’t see the complexity doesn’t mean that it has gone away.

The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning. As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.

What made programming so difficult was that it required you to think like a computer. The strangeness of it was in some sense more vivid in the early days of computing, when code took the form of literal ones and zeros. Anyone looking over a programmer’s shoulder as they pored over line after line like “100001010011” and “000010011110” would have seen just how alienated the programmer was from the actual problems they were trying to solve; it would have been impossible to tell whether they were trying to calculate artillery trajectories or simulate a game of tic-tac-toe. The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.

“The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work. “Software engineers like to provide all kinds of tools and stuff for coding errors,” she says, referring to IDEs. “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”

In September 2007, Jean Bookout was driving on the highway with her best friend in a Toyota Camry when the accelerator seemed to get stuck. When she took her foot off the pedal, the car didn’t slow down. She tried the brakes but they seemed to have lost their power. As she swerved toward an off-ramp going 50 miles per hour, she pulled the emergency brake. The car left a skid mark 150 feet long before running into an embankment by the side of the road. The passenger was killed. Bookout woke up in a hospital a month later.

The incident was one of many in a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible. The National Highway Traffic Safety Administration enlisted software experts from NASA to perform an intensive review of Toyota’s code. After nearly 10 months, the NASA team hadn’t found evidence that software was the cause—but said they couldn’t prove it wasn’t.

It was during litigation of the Bookout accident that someone finally found a convincing connection. Michael Barr, an expert witness for the plaintiff, had a team of software experts spend 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, what’s already there; eventually the code becomes impossible to follow, let alone to test exhaustively for flaws.

Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it. “You have software watching the software,” Barr testified. “If the software malfunctions and the same program or same app that is crashed is supposed to save the day, it can’t save the day because it is not working.”
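A bit flip is as small as a failure can get, which is what makes it so unnerving. A toy Python sketch (the values here are invented, not Toyota’s) shows how inverting a single bit turns one command into a very different one:

    # A simulated single-event memory fault: invert one bit of a stored value.
    def flip_bit(value: int, bit: int) -> int:
        return value ^ (1 << bit)

    throttle_command = 0b00001010          # 10 (of 255): throttle barely open
    corrupted = flip_bit(throttle_command, 7)
    print(corrupted)                       # 138: the same byte now reads as nearly wide open

And if the watchdog task that should catch the corruption is itself the task that has died, as Barr described, nothing is left to correct it.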

Barr’s testimony made the case for the plaintiff, resulting in $3 million in damages for Bookout and her friend’s family. According to The New York Times, it was the first of many similar cases against Toyota to bring to trial problems with the electronic throttle-control system, and the first time Toyota was found responsible by a jury for an accident involving unintended acceleration. The parties decided to settle the case before punitive damages could be awarded. In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.

There will be more bad days for software. It’s important that we get better at making it, because if we don’t, and as software becomes more sophisticated and connected—as it takes control of more critical functions—those days could get worse.

The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little. There is a small but growing chorus that worries the status quo is unsustainable. “Even very good programmers are struggling to make sense of the systems that they are working with,” says Chris Granger, a software developer who worked as a lead at Microsoft on Visual Studio, an IDE that costs $1,199 a year and is used by nearly a third of all professional programmers. He told me that while he was at Microsoft, he arranged an end-to-end study of Visual Studio, the only one that had ever been done. For a month and a half, he watched behind a one-way mirror as people wrote code. “How do they use tools? How do they think?” he said. “How do they sit at the computer, do they touch the mouse, do they not touch the mouse? All these things that we have dogma around that we haven’t actually tested empirically.”

 

The findings surprised him. “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.

John Resig had been noticing the same thing among his students. Resig is a celebrated programmer of JavaScript—software he wrote powers over half of all websites—and a tech lead at the online-education site Khan Academy. In early 2012, he had been struggling with the site’s computer-science curriculum. Why was it so hard to learn to program? The essential problem seemed to be that code was so abstract. Writing software was not like making a bridge out of popsicle sticks, where you could see the sticks and touch the glue. To “make” a program, you typed words. When you wanted to change the behavior of the program, be it a game, or a website, or a simulation of physics, what you actually changed was text. So the students who did well—in fact the only ones who survived at all—were those who could step through that text one instruction at a time in their head, thinking the way a computer would, trying to keep track of every intermediate calculation. Resig, like Granger, started to wonder if it had to be that way. Computers had doubled in power every 18 months for the last 40 years. Why hadn’t programming changed?

The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.

Bret Victor does not like to write code. “It sounds weird,” he says. “When I want to make a thing, especially when I want to create something in software, there’s this initial layer of disgust that I have to push through, where I’m not manipulating the thing that I want to make, I’m writing a bunch of text into a text editor.”

“There’s a pretty strong conviction that that’s the wrong way of doing things.”

Victor has the mien of David Foster Wallace, with a lightning intelligence that lingers beneath a patina of aw-shucks shyness. He is 40 years old, with traces of gray and a thin, undeliberate beard. His voice is gentle, mournful almost, but he wants to share what’s in his head, and when he gets on a roll he’ll seem to skip syllables, as though outrunning his own vocal machinery.

Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated at the top of his class in electrical engineering at the California Institute of Technology, and then went on, after grad school at the University of California, Berkeley, to work at a company that develops music synthesizers. It was a problem perfectly matched to his dual personality: He could spend as much time thinking about the way a performer makes music with a keyboard—the way it becomes an extension of their hands—as he could thinking about the mathematics of digital signal processing.

By the time he gave the talk that made his name, the one that Resig and Granger saw in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
“Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.” That code now takes the form of letters on a screen in a language like C or Java (derivatives of Fortran and ALGOL), instead of a stack of cards with holes in it, doesn’t make it any less dead, any less indirect.

There is an analogy to word processing. It used to be that all you could see in a program for writing documents was the text itself, and to change the layout or font or margins, you had to write special “control codes,” or commands that would tell the computer that, for instance, “this part of the text should be in italics.” The trouble was that you couldn’t see the effect of those codes until you printed the document. It was hard to predict what you were going to get. You had to imagine how the codes were going to be interpreted by the computer—that is, you had to play computer in your head.

Then WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.” When you marked a passage as being in italics, the letters tilted right there on the screen. If you wanted to change the margin, you could drag a ruler at the top of the screen—and see the effect of that change. The document thereby came to feel like something real, something you could poke and prod at. Just by looking you could tell if you’d done something wrong. Control of a sophisticated system—the document’s layout and formatting engine—was made accessible to anyone who could click around on a page.

Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.

And it was the proper job of programmers to ensure that someday they wouldn’t have to.

There was precedent enough to suggest that this wasn’t a crazy idea.

Photoshop, for instance, puts powerful image-processing algorithms in the hands of people who might not even know what an algorithm is. It’s a complicated piece of software, but complicated in the way a good synth is complicated, with knobs and buttons and sliders that the user learns to play like an instrument. Squarespace, a company that is perhaps best known for advertising aggressively on podcasts, makes a tool that lets users build websites by pointing and clicking, instead of by writing code in HTML and CSS. It is powerful enough to do work that once would have been done by a professional web designer.
But those were just a handful of examples. The overwhelming reality was that when someone wanted to do something interesting with a computer, they had to write code. Victor, who is something of an idealist, saw this not so much as an opportunity but as a moral failing of programmers at large. His talk was a call to arms.

At the heart of it was a series of demos that tried to show just how primitive the available tools were for various problems—circuit design, computer animation, debugging algorithms—and what better ones might look like.
His demos were virtuosic. The one that captured everyone’s imagination was, ironically enough, the one that on its face was the most trivial. It showed a split screen with a game that looked like Mario on one side and the code that controlled it on the other. As Victor changed the code, things in the game world changed: He decreased one number, the strength of gravity, and the Mario character floated; he increased another, the player’s speed, and Mario raced across the screen. Suppose you wanted to design a level where Mario, jumping and bouncing off of a turtle, would just make it into a small passageway. Game programmers were used to solving this kind of problem in two stages: First, you stared at your code—the code controlling how high Mario jumped, how fast he ran, how bouncy the turtle’s back was—and made some changes to it in your text editor, using your imagination to predict what effect they’d have. Then, you’d replay the game to see what actually happened.

Victor wanted something more immediate. “If you have a process in time,” he said, referring to Mario’s path through the level, “and you want to see changes immediately, you have to map time to space.” He hit a button that showed not just where Mario was right now, but where he would be at every moment in the future: a curve of shadow Marios stretching off into the far distance. What’s more, this projected path was reactive: When Victor changed the game’s parameters, now controlled by a quick drag of the mouse, the path’s shape changed. It was like having a god’s-eye view of the game. The whole problem had been reduced to playing with different parameters, as if adjusting levels on a stereo receiver, until you got Mario to thread the needle. With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
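The trick itself is simple to sketch. In the toy Python below (the physics and parameter names are invented for illustration, not Victor’s code), the entire future path is computed as data, so changing one parameter redraws every shadow Mario at once:

    # Map time to space: compute the whole future path up front as a list.
    def trajectory(gravity, run_speed, jump_velocity, steps=100, dt=0.1):
        x, y, vy = 0.0, 0.0, jump_velocity
        path = []
        for _ in range(steps):
            x += run_speed * dt
            vy -= gravity * dt
            y = max(0.0, y + vy * dt)   # the floor stops the fall
            path.append((x, y))
        return path

    # Tweaking a parameter re-renders the entire projected path instantly.
    floaty = trajectory(gravity=2.0, run_speed=3.0, jump_velocity=5.0)
    normal = trajectory(gravity=9.8, run_speed=3.0, jump_velocity=5.0)

Once the behavior is a list of points rather than something that has to be replayed, the interface can draw it, drag it, and react to it in real time.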

When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns … [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.

Chris Granger, who had worked at Microsoft on Visual Studio, was likewise inspired. Within days of seeing a video of Victor’s talk, in January of 2012, he built a prototype of a new programming environment. Its key capability was that it would give you instant feedback on your program’s behavior. You’d see what your system was doing right next to the code that controlled it.

It was like taking off a blindfold. Granger called the project “Light Table.”

In April of 2012, he sought funding for Light Table on Kickstarter. In programming circles, it was a sensation. Within a month, the project raised more than $200,000. The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
But seeing the impact that his talk ended up having, Bret Victor was disillusioned. “A lot of those things seemed like misinterpretations of what I was saying,” he said later. He knew something was wrong when people began to invite him to conferences to talk about programming tools. “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.

In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface. Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.

Of course, to do that, you’d have to get programmers themselves on board. In a recent essay, Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.” Exciting work of this sort, in particular a class of tools for “model-based design,” was already underway, he wrote, and had been for years, but most programmers knew nothing about it.

“If you really look hard at all the industrial goods that you’ve got out there, that you’re using, that companies are using, the only non-industrial stuff that you have inside this is the code.”

Eric Bantégnie is the founder of Esterel Technologies (now owned by ANSYS), a French company that makes tools for building safety-critical software. Like Victor, Bantégnie doesn’t think engineers should develop large systems by typing millions of lines of code into an IDE. “Nobody would build a car by hand,” he says. “Code is still, in many places, handicraft. When you’re crafting manually 10,000 lines of code, that’s okay. But you have systems that have 30 million lines of code, like an Airbus, or 100 million lines of code, like your Tesla or high-end cars—that’s becoming very, very complicated.”

Bantégnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules. If you were making the control system for an elevator, for instance, one rule might be that when the door is open, and someone presses the button for the lobby, you should close the door and start moving the car. In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
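A model like this is small enough to write out by hand. The Python below is not SCADE output, just a hypothetical rendering of the elevator rules as an explicit transition table, to show how the model makes them inspectable:

    # The elevator's rules as an explicit transition table (hypothetical).
    TRANSITIONS = {
        ("door_open", "press_lobby"): "door_closed",   # close before moving
        ("door_closed", "press_lobby"): "moving",
        ("moving", "arrive"): "door_closed",           # stop before opening
        ("door_closed", "open_door"): "door_open",
    }

    def step(state, event):
        # Undefined (state, event) pairs leave the elevator where it is.
        return TRANSITIONS.get((state, event), state)

    state = "door_open"
    for event in ["press_lobby", "press_lobby", "arrive", "open_door"]:
        state = step(state, event)
        print(event, "->", state)

Reading the table directly gives you the properties the diagram promises: the only route to "moving" passes through "door_closed", and the door can only open when the car is stopped.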

It’s not quite Photoshop. The beauty of Photoshop, of course, is that the picture you’re manipulating on the screen is the final product. In model-based design, by contrast, the picture on your screen is more like a blueprint. Still, making software this way is qualitatively different than traditional programming. In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.

 
“Typically the main problem with software coding—and I’m a coder myself,” Bantégnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”

On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself. Too much is lost going from one to the other. The idea behind model-based design is to close the gap. The very same model is used both by system designers to express what they want and by the computer to automatically generate code.

Of course, for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to. “We have benefited from fortunately 20 years of initial background work,” Bantégnie says.
Esterel Technologies, which was acquired by ANSYS in 2012, grew out of research begun in the 1980s by the French nuclear and aerospace industries, who worried that as safety-critical code ballooned in complexity, it was getting harder and harder to keep it free of bugs. “I started in 1988,” says Emmanuel Ledinot, the Head of Scientific Studies for Dassault Aviation, a French manufacturer of fighter jets and business aircraft. “At the time, I was working on military avionics systems. And the people in charge of integrating the systems, and debugging them, had noticed that the number of bugs was increasing.”
The ’80s had seen a surge in the number of onboard computers on planes. Instead of a single flight computer, there were now dozens, each responsible for highly specialized tasks related to control, navigation, and communications. Coordinating these systems to fly the plane as data poured in from sensors and as pilots entered commands required a symphony of perfectly timed reactions. “The handling of these hundreds and even thousands of possible events in the right order, at the right time,” Ledinot says, “was diagnosed as the main cause of the bug inflation.”

Ledinot decided that writing such convoluted code by hand was no longer sustainable. It was too hard to understand what it was doing, and almost impossible to verify that it would work correctly. He went looking for something new. “You must understand that to change tools is extremely expensive in a process like this,” he said in a talk. “You don’t take this type of decision unless your back is against the wall.”

He began collaborating with Gerard Berry, a computer scientist at INRIA, the French computing-research center, on a tool called Esterel—a portmanteau of the French for “real-time.” The idea behind Esterel was that while traditional programming languages might be good for describing simple procedures that happened in a predetermined order—like a recipe—if you tried to use them in systems where lots of events could happen at nearly any time, in nearly any order—like in the cockpit of a plane—you inevitably got a mess. And a mess in control software was dangerous.

In a paper, Berry went as far as to predict that “low-level programming techniques will not remain acceptable for large safety-critical programs, since they make behavior understanding and analysis almost impracticable.”


Esterel was designed to make the computer handle this complexity for you. That was the promise of the model-based approach: Instead of writing normal programming code, you created a model of the system’s behavior—in this case, a model focused on how individual events should be handled, how to prioritize events, which events depended on which others, and so on. The model becomes the detailed blueprint that the computer would use to do the actual programming. Ledinot and Berry worked for nearly 10 years to get Esterel to the point where it could be used in production.

“It was in 2002 that we had the first operational software-modeling environment with automatic code generation,” Ledinot told me, “and the first embedded module in Rafale, the combat aircraft.”
Today, the ANSYS SCADE product family (for “safety-critical application development environment”) is used to generate code by companies in the aerospace and defense industries, in nuclear power plants, transit systems, heavy industry, and medical devices. “My initial dream was to have SCADE-generated code in every plane in the world,” Bantégnie, the founder of Esterel Technologies, says, “and we’re not very far off from that objective.” Nearly all safety-critical code on the Airbus A380, including the system controlling the plane’s flight surfaces, was generated with ANSYS SCADE products.

Part of the draw for customers, especially in aviation, is that while it is possible to build highly reliable software by hand, it can be a Herculean effort. Ravi Shivappa, the VP of group software engineering at Meggitt PLC, an ANSYS customer which builds components for airplanes, like pneumatic fire detectors for engines, explains that traditional projects begin with a massive requirements document in English, which specifies everything the software should do. (A requirement might be something like, “When the pressure in this section rises above a threshold, open the safety valve, unless the manual-override switch is turned on.”) The problem with describing the requirements this way is that when you implement them in code, you have to painstakingly check that each one is satisfied. And when the customer changes the requirements, the code has to be changed, too, and tested extensively to make sure that nothing else was broken in the process.
The cost is compounded by exacting regulatory standards. The FAA is fanatical about software safety. The agency mandates that every requirement for a piece of safety-critical software be traceable to the lines of code that implement it, and vice versa. So every time a line of code changes, it must be retraced to the corresponding requirement in the design document, and you must be able to demonstrate that the code actually satisfies the requirement. The idea is that if something goes wrong, you’re able to figure out why; the practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
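Traceability itself can be pictured as a pair of lookup tables. This Python fragment is only a toy (the requirement IDs and function names are invented), but it shows what “every requirement maps to code, and vice versa” means as a checkable property:

    # Toy bidirectional traceability check; IDs and names are hypothetical.
    REQ_TO_CODE = {
        "REQ-017 (open valve above threshold)": ["check_pressure", "open_valve"],
        "REQ-018 (manual override wins)":       ["check_override"],
    }
    CODE_TO_REQ = {fn: req for req, fns in REQ_TO_CODE.items() for fn in fns}

    def untraced(code_units):
        # Any function with no requirement behind it fails the audit.
        return [fn for fn in code_units if fn not in CODE_TO_REQ]

    print(untraced(["check_pressure", "open_valve", "check_override", "log_debug"]))
    # ['log_debug'] -- a change no requirement justifies

Keeping those tables honest by hand, across every change to a large codebase, is the labor Shivappa describes.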

As Bantégnie explains, the beauty of having a computer turn your requirements into code, rather than a human, is that you can be sure—in fact you can mathematically prove—that the generated code actually satisfies those requirements. Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”

Still, most software, even in the safety-obsessed world of aviation, is made the old-fashioned way, with engineers writing their requirements in prose and programmers coding them up in a programming language like C. As Bret Victor made clear in his essay, model-based design is relatively unusual. “A lot of people in the FAA think code generation is magic, and hence call for greater scrutiny,” Shivappa told me.

Most programmers feel the same way. They like code. At least they understand it. Tools that write your code for you and verify its correctness using the mathematics of “finite-state machines” and “recurrent systems” sound esoteric and hard to use, if not just too good to be true.

It is a pattern that has played itself out before. Whenever programming has taken a step away from the writing of literal ones and zeros, the loudest objections have come from programmers. Margaret Hamilton, a celebrated software engineer on the Apollo missions—in fact the coiner of the phrase “software engineering”—told me that during her first year at the Draper lab at MIT, in 1964, she remembers a meeting where one faction was fighting the other about transitioning away from “some very low machine language,” as close to ones and zeros as you could get, to “assembly language.” “The people at the lowest level were fighting to keep it. And the arguments were so similar: ‘Well how do we know assembly language is going to do it right?’”
“Guys on one side, their faces got red, and they started screaming,” she said. She said she was “amazed how emotional they got.”

Emmanuel Ledinot, of Dassault Aviation, pointed out that when assembly language was itself phased out in favor of the programming languages still popular today, like C, it was the assembly programmers who were skeptical this time. No wonder, he said, that “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”

The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”

Which sounds almost like a joke, but for proponents of the model-based approach, it’s an important point: We already know how to make complex software reliable, but in so many places, we’re choosing not to. Why?

In 2011, Chris Newcombe had been working at Amazon for almost seven years, and had risen to be a principal engineer. He had worked on some of the company’s most critical systems, including the retail-product catalog and the infrastructure that managed every Kindle device in the world. He was a leader on the highly prized Amazon Web Services team, which maintains cloud servers for some of the web’s biggest properties, like Netflix, Pinterest, and Reddit. Before Amazon, he’d helped build the backbone of Steam, the world’s largest online-gaming service. He is one of those engineers whose work quietly keeps the internet running. The products he’d worked on were considered massive successes. But all he could think about was that buried deep in the designs of those systems were disasters waiting to happen.

“Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”

Newcombe was convinced that the algorithms behind truly critical systems (systems storing a significant portion of the web’s data, for instance) ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.

This is why he was so intrigued when, in the appendix of a paper he’d been reading, he came across a strange mixture of math and code—or what looked like code—that described an algorithm in something called “TLA+.” The surprising part was that this description was said to be mathematically precise: An algorithm written in TLA+ could in principle be proven correct.

In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.

TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy (say, if you were programming an ATM, a constraint might be that you can never withdraw the same money twice from your checking account). TLA+ then exhaustively checks that your logic does, in fact, satisfy those constraints. If not, it will show you exactly how they could be violated.
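TLA+ has its own mathematical notation, but the kind of check it performs can be suggested in plain Python. The toy below is entirely hypothetical, and vastly simpler than a real specification: it models two concurrent ATM sessions and enumerates every interleaving of their steps, the brute-force exploration a model checker automates.

    import itertools

    def run(schedule):
        # Replay one interleaving of two sessions against a shared balance.
        balance = 100
        seen = {0: None, 1: None}        # each session's (possibly stale) read
        for session, action in schedule:
            if action == "read":
                seen[session] = balance
            elif seen[session] is not None and seen[session] >= 80:
                balance -= 80            # debit based on the earlier read
        return balance

    steps = [(0, "read"), (0, "debit"), (1, "read"), (1, "debit")]
    violations = 0
    for order in itertools.permutations(steps):
        valid = all(order.index((s, "read")) < order.index((s, "debit")) for s in (0, 1))
        if valid and run(order) < 0:
            violations += 1
    print(violations, "interleavings overdraw the account")   # 4 of the 6 valid orders

Four of the six legal schedules let both sessions read the balance before either debits, violating the invariant that the balance never goes negative. A tool like TLA+ finds such traces for you and plays them back step by step.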

The language was invented by Leslie Lamport, a Turing Award–winning computer scientist. With a big white beard and scruffy white hair, and kind eyes behind large glasses, Lamport looks like he might be one of the friendlier professors at the American Hogwarts. Now at Microsoft Research, he is known as one of the pioneers of the theory of “distributed systems,” which describes any computer system made of multiple parts that communicate with each other. Lamport’s work laid the foundation for many of the systems that power the modern web.

For Lamport, a major reason today’s software is so full of bugs is that programmers jump straight into writing code.

“Architects draw detailed plans before a brick is laid or a nail is hammered,” he wrote in an article. “But few programmers write even a rough sketch of what their programs will do before they start coding.” Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought.

“It really does constrain your ability to think when you’re thinking in terms of a programming language,” he says. Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think.

This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.

Newcombe and his colleagues at Amazon would go on to use TLA+ to find subtle, critical bugs in major systems, including bugs in the core algorithms behind S3, regarded as perhaps the most reliable storage engine in the world. It is now used widely at the company. In the tiny universe of people who had ever used TLA+, their success was not so unusual. An intern at Microsoft used TLA+ to catch a bug that could have caused every Xbox in the world to crash after four hours of use. Engineers at the European Space Agency used it to rewrite, with 10 times less code, the operating system of a probe that was the first to ever land softly on a comet. Intel uses it regularly to verify its chips.

But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols. For Lamport, this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”

Lamport sees this failure to think mathematically about what they’re doing as the problem of modern software development in a nutshell: The stakes keep rising, but programmers aren’t stepping up—they haven’t developed the chops required to handle increasingly complex problems. “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”

Newcombe isn’t so sure that it’s the programmer who is to blame. “I’ve heard from Leslie that he thinks programmers are afraid of math. I’ve found that programmers aren’t aware—or don’t believe—that math can help them handle complexity. Complexity is the biggest challenge for programmers.” The real problem in getting people to use TLA+, he said, was convincing them it wouldn’t be a waste of their time. Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.

Most programmers who took computer science in college have briefly encountered formal methods. Usually they’re demonstrated on something trivial, like a program that counts up from zero; the student’s job is to mathematically prove that the program does, in fact, count up from zero.

“I needed to change people’s perceptions on what formal methods were,” Newcombe told me. Even Lamport himself didn’t seem to fully grasp this point: Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.

For one thing, he said that when he was introducing colleagues at Amazon to TLA+ he would avoid telling them what it stood for, because he was afraid the name made it seem unnecessarily forbidding: “Temporal Logic of Actions” has exactly the kind of highfalutin ring to it that plays well in academia, but puts off most practicing programmers. He tried also not to use the terms “formal,” “verification,” or “proof,” which reminded programmers of tedious classroom exercises. Instead, he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.

He has since left Amazon for Oracle, where he’s been able to convince his new colleagues to give TLA+ a try. For him, using these tools is now a matter of responsibility. “We need to get better at this,” he said.

“I’m self-taught, been coding since I was nine, so my instincts were to start coding. That was my only—that was my way of thinking: You’d sketch something, try something, you’d organically evolve it.” In his view, this is what many programmers today still do. “They google, and they look on Stack Overflow” (a popular website where programmers answer each other’s technical questions) “and they get snippets of code to solve their tactical concern in this little function, and they glue it together, and iterate.”

“And that’s completely fine until you run smack into a real problem.”

In the summer of 2015, a pair of American security researchers, Charlie Miller and Chris Valasek, convinced that car manufacturers weren’t taking software flaws seriously enough, demonstrated that a 2014 Jeep Cherokee could be remotely controlled by hackers. They took advantage of the fact that the car’s entertainment system, which has a cellular connection (so that, for instance, you can start your car with your iPhone), was connected to more central systems, like the one that controls the windshield wipers, steering, acceleration, and brakes (so that, for instance, you can see guidelines on the rearview screen that respond as you turn the wheel). As proof of their attack, which they developed on nights and weekends, they hacked into Miller’s car while a journalist was driving it on the highway, and made it go haywire; the journalist, who knew what was coming, panicked when they cut the engines, forcing him to a slow crawl on a stretch of road with no shoulder to escape to.

Although they didn’t actually create one, they showed that it was possible to write a clever piece of software, a “vehicle worm,” that would use the onboard computer of a hacked Jeep Cherokee to scan for and hack others; had they wanted to, they could have had simultaneous access to a nationwide fleet of vulnerable cars and SUVs. (There were at least five Fiat Chrysler models affected, including the Jeep Cherokee.) One day they could have told them all to, say, suddenly veer left or cut the engines at high speed.

“We need to think about software differently,” Valasek told me. Car companies have long assembled their final product from parts made by hundreds of different suppliers. But where those parts were once purely mechanical, they now, as often as not, come with millions of lines of code. And while some of this code—for adaptive cruise control, for auto braking and lane assist—has indeed made cars safer (“The safety features on my Jeep have already saved me countless times,” says Miller), it has also created a level of complexity that is entirely new. And it has made possible a new kind of failure.

“There are lots of bugs in cars,” Gerard Berry, the French researcher behind Esterel, said in a talk. “It’s not like avionics—in avionics it’s taken very seriously. And it’s admitted that software is different from mechanics.” The automotive industry is perhaps among those that haven’t yet realized they are actually in the software business.

“We don’t in the automaker industry have a regulator for software safety that knows what it’s doing,” says Michael Barr, the software expert who testified in the Toyota case. NHTSA, he says, “has only limited software expertise. They’ve come at this from a mechanical history.” The same regulatory pressures that have made model-based design and code generation attractive to the aviation industry have been slower to come to car manufacturing. Emmanuel Ledinot, of Dassault Aviation, speculates that there might be economic reasons for the difference, too. Automakers simply can’t afford to increase the price of a component by even a few cents, since it is multiplied so many millionfold; the computers embedded in cars therefore have to be slimmed down to the bare minimum, with little room to run code that hasn’t been hand-tuned to be as lean as possible. “Introducing model-based software development was, I think, for the last decade, too costly for them.”

One suspects the incentives are changing. “I think the autonomous car might push them,” Ledinot told me—“ISO 26262 and the autonomous car might slowly push them to adopt this kind of approach on critical parts.” (ISO 26262 is a safety standard for cars published in 2011.) Barr said much the same thing: In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.

“Computing is fundamentally invisible,” Gérard Berry said in his talk. “When your tires are flat, you look at your tires, they are flat. When your software is broken, you look at your software, you see nothing.”

“So that’s a big problem.”

-30-

…………….. I’m sorry, Dave… https://goo.gl/images/PTbTmR

 

…What Facebook Did to American Democracy…

      ….and why was it so hard to see it coming?…………

The continental United States with the Facebook logo superimposed. (Luchenko Yana / Shutterstock / Zak Bickel / The Atlantic)

What Facebook Did to American Democracy

In the media world, as in so many other realms, there is a sharp discontinuity in the timeline: before the 2016 election, and after. Things we thought we understood—narratives, data, software, news events—have had to be reinterpreted in light of Donald Trump’s surprising win as well as the continuing questions about the role that misinformation and disinformation played in his election.

Tech journalists covering Facebook had a duty to cover what was happening before, during, and after the election. Reporters tried to see past their often liberal political orientations and the unprecedented actions of Donald Trump to see how 2016 was playing out on the internet. Every component of the chaotic digital campaign has been reported on, here at The Atlantic and elsewhere: Facebook’s enormous distribution power for political information, rapacious partisanship reinforced by distinct media information spheres, the increasing scourge of “viral” hoaxes and other kinds of misinformation that could propagate through those networks, and the Russian information-operations agency.
But no one delivered the synthesis that could have tied together all these disparate threads. It’s not that this hypothetical perfect story would have changed the outcome of the election. The real problem—for all political stripes—is understanding the set of conditions that led to Trump’s victory. The informational underpinnings of democracy have eroded, and no one has explained precisely how. We’ve known since at least 2012 that Facebook was a powerful, non-neutral force in electoral politics.
In that year, a combined University of California, San Diego and Facebook research team led by James Fowler published a study in Nature, which argued that Facebook’s “I Voted” button had driven a small but measurable increase in turnout, primarily among young people.

Rebecca Rosen’s 2012 story, “Did Facebook Give Democrats the Upper Hand?” relied on new research from Fowler, et al., about the presidential election that year.
Again, the conclusion of their work was that Facebook’s get-out-the-vote message could have driven a substantial chunk of the increase in youth voter participation in the 2012 general election. Fowler told Rosen that it was “even possible that Facebook is completely responsible” for the youth voter increase.  And because a higher proportion of young people vote Democratic than the general population, the net effect of Facebook’s GOTV effort would have been to help the Dems.

The research showed that a small design change by Facebook could have electoral repercussions, especially with America’s electoral-college format in which a few hotly contested states have a disproportionate impact on the national outcome.  And the pro-liberal effect it implied became enshrined as an axiom of how campaign staffers, reporters, and academics viewed social media.
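To see that leverage concretely, here is a minimal back-of-envelope sketch. Every number in it is an assumption invented for illustration, not a figure from the Fowler study:

```python
# Illustrative arithmetic only: all inputs below are hypothetical.
state_voters = 5_000_000        # assumed eligible voters in one contested state
turnout_lift = 0.004            # assumed GOTV nudge: +0.4 percentage points
dem_share_of_new_voters = 0.60  # assumed partisan lean of the nudged group

new_voters = state_voters * turnout_lift                    # 20,000 extra voters
net_margin_shift = new_voters * (2 * dem_share_of_new_voters - 1)

print(f"{new_voters:,.0f} new voters, net margin shift of {net_margin_shift:,.0f}")
# -> 20,000 new voters, net margin shift of 4,000
# Tiny next to 5 million ballots, but far larger than the 537-vote margin
# that decided Florida in 2000.
```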

In June 2014, Harvard Law scholar Jonathan Zittrain wrote an essay in New Republic called “Facebook Could Decide an Election Without Anyone Ever Finding Out,” in which he called attention to the possibility of Facebook selectively depressing voter turnout. (He also suggested that Facebook be seen as an “information fiduciary,” charged with certain special roles and responsibilities because it controls so much personal data.)

In late 2014, The Daily Dot called attention to an obscure Facebook-produced case study on how strategists defeated a statewide measure in Florida by relentlessly focusing Facebook ads on Broward and Dade counties, Democratic strongholds. Working with a tiny budget that would have allowed them to send a single mailer to just 150,000 households, the digital-advertising firm Chong and Koster was able to obtain remarkable results. “Where the Facebook ads appeared, we did almost 20 percentage points better than where they didn’t,” testified a leader of the firm. “Within that area, the people who saw the ads were 17 percent more likely to vote our way than the people who didn’t. Within that group, the people who voted the way we wanted them to, when asked why, often cited the messages they learned from the Facebook ads.”

In April 2016, Robinson Meyer published “How Facebook Could Tilt the 2016 Election” after a company meeting in which some employees apparently put the stopping-Trump question to Mark Zuckerberg. Based on Fowler’s research, Meyer reimagined Zittrain’s hypothetical as a direct Facebook intervention to depress turnout among non-college graduates, who leaned Trump as a whole.
Facebook, of course, said it would never do such a thing. “Voting is a core value of democracy and we believe that supporting civic participation is an important contribution we can make to the community,” a spokesperson said. “We as a company are neutral—we have not and will not use our products in a way that attempts to influence how people vote.”

They wouldn’t do it intentionally, at least.

As all these examples show, though, the potential for Facebook to have an impact on an election was clear for at least half a decade before Donald Trump was elected. But rather than focusing specifically on the integrity of elections, most writers—myself included, though observers like Sasha Issenberg, Zeynep Tufekci, and Daniel Kreiss were exceptions—bundled electoral problems inside other, broader concerns like privacy, surveillance, tech ideology, media-industry competition, or the psychological effects of social media.

The same was true even of people inside Facebook. “If you’d come to me in 2012, when the last presidential election was raging and we were cooking up ever more complicated ways to monetize Facebook data, and told me that Russian agents in the Kremlin’s employ would be buying Facebook ads to subvert American democracy, I’d have asked where your tin-foil hat was,” wrote Antonio García Martínez, who managed ad targeting for Facebook back then. “And yet, now we live in that otherworldly political reality.”

Not to excuse us, but this was back on the Old Earth, too, when electoral politics was not the thing that every single person talked about all the time. There were other important dynamics to Facebook’s growing power that needed to be covered. Facebook’s draw is its ability to give you what you want. Like a page, get more of that page’s posts; like a story, get more stories like that; interact with a person, get more of their updates. Facebook ranks the News Feed by the probability that you’ll like, comment on, or share a story. Shares are worth more than comments, which are both worth more than likes, but in all cases, the more likely you are to interact with a post, the higher it will show up in your News Feed. Two thousand kinds of data (or “features,” in the industry parlance) get smelted in Facebook’s machine-learning system to make those predictions.
What’s crucial to understand is that, from the system’s perspective, success is correctly predicting what you’ll like, comment on, or share. That’s what matters. People call this “engagement.” There are other factors, as Slate’s Will Oremus noted in a rare story about the News Feed ranking team. But who knows how much weight they actually receive, or for how long, as the system evolves. For example, one change that Facebook highlighted to Oremus in early 2016—taking into account how long people look at a story, even if they don’t click it—was subsequently dismissed by Lars Backstrom, the VP of engineering in charge of News Feed ranking, in a May 2017 technical talk as a “noisy” signal that’s also “biased in a few ways,” making it “hard to use.”
Facebook’s engineers do not want to introduce noise into the system, because the News Feed, this machine for generating engagement, is Facebook’s most important technical system. Its success at predicting what you’ll like is why users spend an average of more than 50 minutes a day on the site, and why even the former creator of the “like” button worries about how well the site captures attention. News Feed works really well.
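As a rough illustration of the mechanism described above, here is a minimal sketch of engagement-weighted ranking. The weights, probabilities, and story titles are all invented; the real News Feed reportedly blends thousands of machine-learned features rather than three hand-set numbers:

```python
# A toy sketch of engagement-weighted ranking, NOT Facebook's actual model.
# Real systems predict these probabilities from thousands of features;
# here the probabilities are simply given, and only the ordering is shown.
from dataclasses import dataclass

# Assumed relative weights: shares > comments > likes, per the text above.
WEIGHTS = {"share": 3.0, "comment": 2.0, "like": 1.0}

@dataclass
class Story:
    title: str
    p_like: float     # predicted probability the viewer likes it
    p_comment: float  # ... comments on it
    p_share: float    # ... shares it

def engagement_score(s: Story) -> float:
    """Expected engagement: weighted sum of predicted interactions."""
    return (WEIGHTS["like"] * s.p_like
            + WEIGHTS["comment"] * s.p_comment
            + WEIGHTS["share"] * s.p_share)

feed = [
    Story("Calm policy explainer", p_like=0.08, p_comment=0.01, p_share=0.005),
    Story("Outrage-bait hoax",     p_like=0.10, p_comment=0.06, p_share=0.05),
]

# Higher expected engagement rises to the top -- regardless of accuracy.
for story in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(story):.3f}  {story.title}")
```

Note that nothing in that objective rewards truth or civic value; it rewards only predicted interaction, which is the crux of much of what follows.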

But as far as “personalized newspapers” go, this one’s editorial sensibilities are limited. Most people are far less likely to engage with viewpoints that they find confusing, annoying, incorrect, or abhorrent. And this is true not just in politics, but in the broader culture.

That this could be a problem was apparent to many. Eli Pariser’s The Filter Bubble, which came out in the summer of 2011, became the most widely cited distillation of the effects Facebook and other internet platforms could have on public discourse.

Pariser began researching the book when he noticed that conservative people he’d befriended on the platform, despite his left-leaning politics, had disappeared from his News Feed. “I was still clicking my progressive friends’ links more than my conservative friends’—and links to the latest Lady Gaga videos more than either,” he wrote. “So no conservative links for me.”

Throughout the book, he traces the many potential problems that the “personalization” of media might bring. Most germane to this discussion, he raises the point that if every one of a billion News Feeds is different, no one can know what other people are seeing and responding to.
“The most serious political problem posed by filter bubbles is that they make it increasingly difficult to have a public argument. As the number of different segments and messages increases, it becomes harder and harder for the campaigns to track who’s saying what to whom,” Pariser wrote. “How does a [political] campaign know what its opponent is saying if ads are only targeted to white Jewish men between 28 and 34 who have expressed a fondness for U2 on Facebook and who donated to Barack Obama’s campaign?”
This did, indeed, become an enormous problem. When I was editor in chief of Fusion, we set about trying to track the “digital campaign” with several dedicated people. What we quickly realized was that there was both too much data—the noisiness of all the different posts by the various candidates and their associates—and too little. Targeting made it impossible to track the actual messaging that the campaigns were paying for. On Facebook, the campaigns could show ads only to the people they targeted, so we couldn’t see the messages that were actually reaching people in battleground areas. From the outside, knowing what ads were running on Facebook was a technical impossibility, one that the company had fought to keep intact.

Pariser suggests in his book, “one simple solution to this problem would simply be to require campaigns to immediately disclose all of their online advertising materials and to whom each ad is targeted.” Which could happen in future campaigns.

Imagine if this had happened in 2016. If there were data sets of all the ads that the campaigns and others had run, we’d know a lot more about what actually happened last year. The Filter Bubble is obviously prescient work, but there was one thing that Pariser and most other people did not foresee: that Facebook would become completely dominant as a media distributor.

About two years after Pariser published his book, Facebook took over the news-media ecosystem. They’ve never publicly admitted it, but in late 2013, they began to serve ads inviting users to “like” media pages. This caused a massive increase in the amount of traffic that Facebook sent to media companies. At The Atlantic and other publishers across the media landscape, it was like a tide was carrying us to new traffic records. Without hiring anyone else, without changing strategy or tactics, without publishing more, suddenly everything was easier.
Traffic to The Atlantic from Facebook.com increased, but at the time most of the new traffic did not look, within The Atlantic’s analytics, like it was coming from Facebook. It showed up as “direct/bookmarked” or some variation, depending on the software. It looked like what I had called “dark social” back in 2012. But as BuzzFeed’s Charlie Warzel pointed out at the time, and as I came to believe, it was primarily Facebook traffic in disguise. Between August and October of 2013, BuzzFeed’s “partner network” of hundreds of websites saw a 69 percent jump in traffic from Facebook.

At The Atlantic, we ran a series of experiments that showed, pretty definitively from our perspective, that most of the stuff that looked like “dark social” was, in fact, traffic coming from within Facebook’s mobile app.
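“Dark social” is, at bottom, a referrer-classification problem. Here is a minimal sketch of the kind of logic such experiments probe; the log fields and the FB_IAB user-agent token are assumptions for illustration, not The Atlantic’s actual instrumentation:

```python
# Sketch of classifying pageviews by referrer, as a web-analytics tool might.
# Field names and the FB_IAB token are illustrative assumptions.
def classify(pageview: dict) -> str:
    referrer = pageview.get("referrer", "")
    user_agent = pageview.get("user_agent", "")
    if "facebook.com" in referrer:
        return "facebook"
    if not referrer:
        # No referrer header at all: classic "dark social" / direct traffic.
        # But an in-app browser may still identify itself in the user-agent
        # even when it strips the referrer.
        if "FB_IAB" in user_agent:
            return "facebook in-app (was hiding as dark social)"
        return "dark social / direct"
    return "other"

views = [
    {"referrer": "https://www.facebook.com/", "user_agent": "Mozilla/5.0"},
    {"referrer": "", "user_agent": "Mozilla/5.0 [FB_IAB/FB4A]"},
    {"referrer": "", "user_agent": "Mozilla/5.0"},
]
for v in views:
    print(classify(v))
```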
Across the landscape, it began to dawn on people who thought about these kinds of things: Damn, Facebook owns us. They had taken over media distribution.
Why? This is a best guess, proffered by Robinson Meyer as it was happening: Facebook wanted to crush Twitter, which had drawn a disproportionate share of media and media-figure attention. Just as Instagram later borrowed Snapchat’s “Stories” to blunt that app’s growth, Facebook decided it needed to own “news” to take the wind out of the newly IPO’d Twitter.

The first sign that this new system had some kinks came with “Upworthy-style” headlines. (And you’ll never guess what happened next!) Things didn’t just go kind of viral; they went ViralNova, a site that, like Upworthy itself, Facebook eventually smacked down. Many of the new sites, like Upworthy, which was cofounded by Pariser, had a progressive bent.

Less noticed was that a right-wing media was developing in opposition to and alongside these left-leaning sites. “By 2014, the outlines of the Facebook-native hard-right voice and grievance spectrum were there,” The New York Times’ media and tech writer John Herrman told me, “and I tricked myself into thinking they were a reaction/counterpart to the wave of soft progressive/inspirational content that had just crested. It ended up a Reaction in a much bigger and destabilizing sense.”

The other sign of algorithmic trouble was the wild swings that Facebook Video underwent. In the early days, just about any old video was likely to generate many, many, many views; the numbers were insane. Just as an example, a Fortune article noted that BuzzFeed’s video views “grew 80-fold in a year, reaching more than 500 million in April.” Suddenly, all kinds of video—good, bad, and ugly—were doing 1, 2, 3 million views.

As with news, Facebook’s video push was a direct assault on a competitor, YouTube. Videos changed the dynamics of the News Feed for individuals, for media companies, and for anyone trying to understand what the hell was going on. Individuals were suddenly inundated with video. Media companies, despite having no business model for it, were forced to crank out video somehow or risk their pages and brands losing relevance as video posts crowded others out. And on top of all that, scholars and industry observers were used to looking at articles to understand how information was flowing. Now, by far the most-viewed media objects on Facebook, and therefore on the internet, were videos without transcripts or centralized repositories. In the early days, many successful videos were simply “freebooted” (i.e., stolen) from other places or reposted. All of which served to confuse and obfuscate the transport mechanisms for information and ideas on Facebook.

Through this messy, chaotic, dynamic situation, a new media rose up through the Facebook burst to occupy the big filter bubbles. On the right, Breitbart is the center of a new conservative network. A study of 1.25 million election news articles found “a right-wing media network anchored around Breitbart developed as a distinct and insulated media system, using social media as a backbone to transmit a hyper-partisan perspective to the world.”

Breitbart, of course, also lent Steve Bannon, its chief, to the Trump campaign, creating another feedback loop between the candidate and a rabid partisan press. Through 2015, Breitbart grew from a medium-sized site with a small Facebook page of 100,000 likes into a powerful force shaping the election, with almost 1.5 million likes. In the key metric for Facebook’s News Feed, its posts got 886,000 interactions from Facebook users in January.
By July, Breitbart had surpassed The New York Times’ main account in interactions. By December, it was doing 10 million interactions per month, about 50 percent of Fox News’s total; Fox had 11.5 million likes on its main page. Breitbart’s audience was hyper-engaged. There is no precise equivalent to the Breitbart phenomenon on the left. Rather, the big news organizations are classified as center-left, basically, with fringier left-wing sites showing far smaller followings than Breitbart has on the right.
And this new, hyperpartisan media created the perfect conditions for another dynamic that influenced the 2016 election, the rise of fake news.
                                                                     
In a December 2015 article for BuzzFeed, Joseph Bernstein argued that “the dark forces of the internet became a counterculture.”  He called it “Chanterculture” after the trolls who gathered at the meme-creating, often-racist 4chan message board. Others ended up calling it the “alt-right.”  This culture combined a bunch of people who loved to perpetuate hoaxes with angry Gamergaters with “free-speech” advocates like Milo Yiannopoulos with honest-to-God neo-Nazis and white supremacists.
And these people loved Donald Trump.
“This year Chanterculture found its true hero, who makes it plain that what we’re seeing is a genuine movement: the current master of American resentment, Donald Trump,” Bernstein wrote. “Everywhere you look on ‘politically incorrect’ subforums and random chans, he looms.” When you combine hyper-partisan media with a group of people who love to clown “normies,” you end up with things like Pizzagate, a patently ridiculous and widely debunked conspiracy theory that held that there was, somehow, a child-pedophilia ring linked to Hillary Clinton. It was just the most bizarre thing in the entire world. And many of the figures in Bernstein’s story were all over it, including several whom the current president has consorted with on social media. But Pizzagate was only the most Pynchonian of all the crazy misinformation and hoaxes that spread in the run-up to the election.
BuzzFeed, deeply attuned to the flows of the social web, was all over the story through reporter Craig Silverman. His best-known analysis happened after the election, when he showed that “in the final three months of the U.S. presidential campaign, the top-performing fake election-news stories on Facebook generated more engagement than the top stories from major news outlets such as The New York Times, The Washington Post, The Huffington Post, NBC News, and others.”
But he also tracked fake news before the election, as did other outlets such as The Washington Post, including showing that Facebook’s “Trending” algorithm regularly promoted fake news. By September of 2016, even the Pope himself was talking about fake news, by which we mean actual hoaxes or lies perpetrated by a variety of actors.

The longevity of Snopes shows that hoaxes are nothing new to the internet. As early as January 2015, Robinson Meyer was reporting on how Facebook was “cracking down on the fake news stories that plague News Feeds everywhere.”

What made the election cycle different was that all of these changes to the information ecosystem had made it possible to develop weird businesses around fake news. Some random website posting aggregated news about the election could not drive a lot of traffic. But some random website announcing that the Pope had endorsed Donald Trump definitely could. The fake news generated a ton of engagement, which meant that it spread far and wide.

A few days before the election, Silverman and fellow BuzzFeed contributor Lawrence Alexander traced 100 pro–Donald Trump sites to a town of 45,000 in Macedonia.
Some teens there realized they could make money off the election, and just like that, became a node in the information network that helped Trump beat Clinton.

Whatever weird thing you imagine might happen, something weirder probably did happen. Reporters tried to keep up, but it was too strange. As Max Read put it in New York Magazine, Facebook is “like a four-dimensional object, we catch slices of it when it passes through the three-dimensional world we recognize.” No one can quite wrap their heads around what this thing has become, or all the things this thing has become.

“Not even President-Pope-Viceroy Zuckerberg himself seemed prepared for the role Facebook has played in global politics this past year,” Read wrote.

And we haven’t even gotten to the Russians. Russia’s disinformation campaigns are well known.
During his reporting for a story in The New York Times Magazine, Adrian Chen sat across the street from the headquarters of the Internet Research Agency, watching workaday Russian agents/internet trolls head inside. From a former employee, he heard how the place had “industrialized the art of trolling.” “Management was obsessed with statistics—page views, number of posts, a blog’s place on LiveJournal’s traffic charts—and team leaders compelled hard work through a system of bonuses and fines,” he wrote. Of course they wanted to maximize engagement, too!
 
There were reports that Russian trolls were commenting on American news sites. There were many, many reports of Russia’s propaganda offensive in Ukraine. Ukrainian journalists run a website called StopFake, dedicated to cataloging these disinformation attempts; it has hundreds of posts reaching back to 2014.

A Guardian reporter who looked into Russian military doctrine around information war found a handbook that described how it might work. “The deployment of information weapons, [the book] suggests, ‘acts like an invisible radiation’ upon its targets: ‘The population doesn’t even feel it is being acted upon. So the state doesn’t switch on its self-defense mechanisms,’” wrote Peter Pomerantsev.

As more details about the Russian disinformation campaign come to the surface through Facebook’s continued digging, it’s fair to say that it’s not just the state that did not switch on its self-defense mechanisms. The influence campaign just happened on Facebook without anyone noticing.

As many people have noted, the 3,000 ads that have been linked to Russia are a drop in the bucket, even if they did reach millions of people. The real game is simply that Russian operatives created pages that reached people “organically,” as the saying goes. Jonathan Albright, research director of the Tow Center for Digital Journalism at Columbia University, pulled data on the six publicly known Russia-linked Facebook pages. He found that their posts had been shared 340 million times. And those were six of 470 pages that Facebook has linked to Russian operatives. You’re probably talking billions of shares, with who knows how many views, and with what kind of specific targeting. 
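For a sense of scale, here is the back-of-envelope arithmetic, under the strong assumption that the six known pages are roughly typical of all 470. In reality, the publicly identified pages may be the largest, so this likely overstates the total:

```python
# Back-of-envelope extrapolation from Albright's figures.
# Assumption (mine, not Albright's): the six known pages are typical
# of all 470 -- a strong assumption that probably inflates the estimate.
known_pages = 6
known_shares = 340_000_000
total_pages = 470

shares_per_page = known_shares / known_pages       # ~56.7 million
estimated_total = shares_per_page * total_pages    # ~26.6 billion

print(f"~{estimated_total / 1e9:.1f} billion shares")  # -> ~26.6 billion shares
```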

The Russians are good at engagement! Yet before the U.S. election, even after Hillary Clinton and the intelligence agencies fingered Russian intelligence for meddling in the election, even after news reports suggested that a disinformation campaign was afoot, nothing about the actual operations on Facebook came out. In the aftermath of these discoveries, three Facebook security researchers, Jen Weedon, William Nuland, and Alex Stamos, released a white paper called Information Operations and Facebook. “We have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam, and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people,” they wrote.

One key theme of the paper is that they were used to dealing with economic actors, who responded to costs and incentives. When it comes to Russian operatives paid to wage information operations on Facebook, those constraints no longer hold. “The area of information operations does provide a unique challenge,” they wrote, “in that those sponsoring such operations are often not constrained by per-unit economic realities in the same way as spammers and click fraudsters, which increases the complexity of deterrence.” They were not expecting that.

Add everything up. The chaos of a billion-person platform that competitively dominated media distribution. The known electoral efficacy of Facebook. The wild fake news and misinformation rampaging across the internet generally and Facebook specifically.  The Russian info operations.  All of these things were known.

And yet no one could quite put it all together: The dominant social network had altered the information and persuasion environment of the election beyond recognition while taking a very big chunk of the estimated $1.4 billion worth of digital advertising purchased during the election. There were hundreds of millions of dollars of dark ads doing their work. Fake news all over the place.
Macedonian teens campaigning for Trump. Ragingly partisan media infospheres serving up only the news you wanted to hear.
Who could believe anything? What room was there for policy positions when all this stuff was eating up News Feed space? Who the hell knew what was going on? As late as August 20, 2016, The Washington Post could say this of the campaigns:

Hillary Clinton is running arguably the most digital presidential campaign in U.S. history. Donald Trump is running one of the most analog campaigns in recent memory. The Clinton team is bent on finding more effective ways to identify supporters and ensure they cast ballots; Trump is, famously and unapologetically, sticking to a 1980s-era focus on courting attention and voters via television.

Just a week earlier, Trump’s campaign had hired Cambridge Analytica.

Soon, they’d ramped up to $70 million a month in Facebook advertising spending. And the next thing you knew, Brad Parscale, Trump’s digital director, was doing the postmortem rounds, talking up his win.

“These social platforms are all invented by very liberal people on the west and east coasts,” Parscale said. “And we figure out how to use it to push conservative values. I don’t think they thought that would ever happen.”

And that was part of the media’s problem, too.
Before Trump’s election, the impact of internet technology generally and Facebook specifically was seen as favoring Democrats.
Even a TechCrunch critique of Rosen’s 2012 article about Facebook’s electoral power argued, “the internet inherently advantages liberals because, on average, their greater psychological embrace of disruption leads to more innovation (after all, nearly every major digital breakthrough, from online fundraising to the use of big data, was pioneered by Democrats).” Certainly, the Obama tech team that I profiled in 2012 thought this was the case.
Of course, social media would benefit the (youthful, diverse, internet-savvy) left.  And the political bent of just about all Silicon Valley companies runs Democratic.  For all the talk about Facebook employees embedding with the Trump campaign, the former CEO of Google, Eric Schmidt, sat with the Obama tech team on Election Day 2012.
In June 2015, The New York Times ran an article about Republicans trying to ramp up their digital campaigns that began like this: “The criticism after the 2012 presidential election was swift and harsh: Democrats were light-years ahead of Republicans when it came to digital strategy and tactics, and Republicans had serious work to do on the technology front if they ever hoped to win back the White House.”

University of North Carolina journalism professor Daniel Kreiss wrote a whole (good) book, Prototype Politics, showing that Democrats had an incredible personnel advantage. “Drawing on an innovative data set of the professional careers of 629 staffers working in technology on presidential campaigns from 2004 to 2012 and data from interviews with more than 60 party and campaign staffers,” Kreiss wrote, “the book details how and explains why the Democrats have invested more in technology, attracted staffers with specialized expertise to work in electoral politics, and founded an array of firms and organizations to diffuse technological innovations down ballot and across election cycles.”

Which is to say: It’s not that no journalists, internet-focused lawyers, or technologists saw Facebook’s looming electoral presence—it was undeniable—but all the evidence pointed to the structural change benefitting Democrats. And let’s just state the obvious: Most reporters and professors are probably about as liberal as your standard Silicon Valley technologist, so this conclusion fit into the comfort zone of those in the field.

By late October, the role that Facebook might be playing in the Trump campaign—and more broadly—was emerging.
Joshua Green and Issenberg reported a long feature on the data operation then in motion. The Trump campaign was working to suppress the votes of “idealistic white liberals, young women, and African Americans,” and they’d be doing it with targeted, “dark” Facebook ads. These ads are visible only to the buyer, the ad recipients, and Facebook. No one who hasn’t been targeted by them can see them. How was anyone supposed to know what was going on, when the key campaign terrain was literally invisible to outside observers? Steve Bannon was confident in the operation. “I wouldn’t have come aboard, even for Trump, if I hadn’t known they were building this massive Facebook and data engine,” Bannon told them. “Facebook is what propelled Breitbart to a massive audience. We know its power.”

Issenberg and Green called it “an odd gambit” which had “no scientific basis.” Then again, Trump’s whole campaign had seemed like an odd gambit with no scientific basis. The conventional wisdom was that Trump was going to lose and lose badly. In the days before the election, The Huffington Post’s data team had Clinton’s election probability at 98.3 percent. A member of the team, Ryan Grim, went after Nate Silver for his more conservative probability of 64.7 percent, accusing him of skewing his data for “punditry” reasons. Grim ended his post on the topic, “If you want to put your faith in the numbers, you can relax. She’s got this.”

 
Narrator: She did not have this. But the point isn’t that a Republican beat a Democrat. The point is that the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.
In the middle of the summer of the election, the former Facebook ad-targeting product manager, Antonio García Martínez, released an autobiography called Chaos Monkeys. He called his colleagues “chaos monkeys,” messing with industry after industry in their company-creating fervor. “The question for society,” he wrote, “is whether it can survive these entrepreneurial chaos monkeys intact, and at what human cost.”
This is the real epitaph of the election.

The information systems that people use to process news have been rerouted through Facebook and, in the process, mostly broken and hidden from view. It wasn’t just liberal bias that kept the media from putting everything together. Much of the hundreds of millions of dollars spent during the election cycle came in the form of “dark ads.”

The truth is that while many reporters knew some things that were going on on Facebook, no one knew everything that was going on on Facebook, not even Facebook. And so, during the most significant shift in the technology of politics since television, the first draft of history is filled with undecipherable whorls and empty pages.
Meanwhile, the 2018 midterms loom.
Update: After publication, Adam Mosseri, head of News Feed, sent an email describing some of the work that Facebook is doing in response to the problems during the election. It includes new software and processes “to stop the spread of misinformation, click-bait and other problematic content on Facebook.”

“The truth is we’ve learned things since the election, and we take our responsibility to protect the community of people who use Facebook seriously. As a result, we’ve launched a company-wide effort to improve the integrity of information on our service,” he wrote. “It’s already translated into new products, new protections, and the commitment of thousands of new people to enforce our policies and standards… We know there is a lot more work to do, but I’ve never seen this company more engaged on a single challenge since I joined almost 10 years ago.”

 

-30-

…………this one should have legs…….comments should abound….or not…….

 

 

 
