Cleveland Browns




In 1995, Modell announced he was relocating the Browns to Baltimore, sowing a mix of outrage and bitterness among Cleveland's dedicated fan base. Negotiations and legal battles led to an agreement under which Modell was allowed to move the team, but Cleveland kept the Browns' name, colors and history. After three seasons of suspended operations, during which Cleveland Stadium was demolished and Cleveland Browns Stadium built in its place, the Browns resumed play in 1999 under new owner Al Lerner.

The Browns struggled throughout the 2000s and 2010s, posting only two winning seasons and one playoff appearance since returning to the NFL.

The team's struggles have been magnified since 2012, when the Lerner family sold the team to businessman Jimmy Haslam. In six seasons under Haslam's ownership, the Browns went through four head coaches and four general managers, none of whom found success. In 2016 and 2017 under head coach Hue Jackson, the Browns went 1–31. The Browns are the only National Football League team without a helmet logo.

The logoless helmet serves as the Browns' official logo. The organization has used several promotional logos throughout the years; players' numbers were painted on the helmets for a stretch of seasons, and an unused "CB" logo [15] was also created. [16] But for much of their history, the Browns' helmets have been an unadorned burnt orange color with a top stripe of dark brown (officially called "seal brown") divided by a white stripe.

The team has had various promotional logos throughout the years, such as the "Brownie Elf" mascot or a brown "B" in a white football. While Art Modell did away with the Brownie Elf in the mid-1960s, believing it to be too childish, its use has been revived under the current ownership. The popularity of the Dawg Pound section at FirstEnergy Stadium has led to a brown and orange dog being used for various Browns functions. But overall, the orange, logo-less helmet continues as the primary trademark of the Cleveland Browns.

On February 24, 2015, the team unveiled its new logos and word marks, the only differences being minor color changes to the helmet, with the helmet design otherwise remaining largely as is. The original designs of the jerseys, pants, and socks have remained mostly the same, but the helmets have gone through many significant revisions over the years.

The first massive change to the Browns uniforms came well into the franchise's history. The brown jerseys (officially "seal brown") have featured orange numerals and lettering with an orange-white-orange stripe sequence on the sleeves, while the orange jerseys have featured white numerals and lettering with a brown-white-brown stripe sequence.

The white pants carry a brown-orange-brown stripe sequence; the orange pants carry a brown-white-brown stripe sequence. The helmet designs, in order, have been: solid white; solid white for day games and solid orange for night games; orange with a single white stripe; orange with a single white stripe and brown numerals on the sides; orange with a brown-white-brown stripe sequence and brown numerals on the sides; and orange with a brown-white-brown stripe sequence, the current design.

Until recently, when more NFL teams started to wear white at home at least once a season, the Browns were the only non-subtropical team north of the Mason-Dixon line to wear white at home on a regular basis. Numerals known as "TV numbers" later appeared on the jersey sleeves. Over the years, there have been minor revisions to the sleeve stripes, the first occurring on the brown jerseys worn early in one season and on the white and brown jerseys when stripes began to be silk-screened onto the sleeves and separated from each other to prevent color bleeding.

However, the basic five-stripe sequence has remained intact with the exception of one season. A recent revision was the addition of the initials "AL" to honor team owner Al Lerner, who died in 2002; this was removed upon Jimmy Haslam's assumption of ownership of the team. Orange pants with a brown-white-brown stripe sequence were worn for a number of seasons and became symbolic of the "Kardiac Kids" era. The orange pants were later worn again occasionally. Other than the helmet, the uniform was at one point completely redesigned.

New striping patterns appeared on the white jerseys, brown jerseys and pants. Solid brown socks were worn with brown jerseys and solid orange socks were worn with white jerseys. Brown numerals on the white jerseys were outlined in orange. White numerals on the brown jerseys were double outlined in brown and orange.

Orange numerals double outlined in brown and white appeared briefly on the brown jerseys in one preseason game. The redesign remained in place for a time. In 1999, the expansion Browns adopted the traditional design with two exceptions. Experimentation with the uniform design began a few seasons later. An alternate orange jersey was introduced that season as the NFL encouraged teams to adopt a third jersey, and a major design change was made when solid brown socks appeared for the first time in many years and were used with the white, brown and orange jerseys.

Other than that season, striped socks matching the jersey stripes had been a signature design element in the team's traditional uniform. The white striped socks appeared occasionally with the white jerseys in some seasons. Experimentation continued in later years, when the traditional orange-brown-orange stripes on the white pants were replaced by two variations of a brown-orange-brown sequence: one in which the stripes were joined (worn with white jerseys) and one in which they were separated by white (worn with brown jerseys).

The joined sequence was later used exclusively with both jerseys before the traditional orange-brown-orange sequence returned. Additionally, the team reverted to an older uniform style, featuring gray face masks and the original stripe pattern on the brown jersey sleeves; the white jersey has had that sleeve stripe pattern on a consistent basis ever since.

The Browns wore brown pants for the first time in team history in an August 18 preseason game against the New York Giants. The pants contain no stripes or markings. The team had the brown pants created as an option for its away uniform when it brought back the gray facemask. The Browns chose to wear white at home that season and wound up wearing white for all 16 games, since the home teams they visited wore their darker-colored uniforms.

The Browns brought back the brown pants in their home game against the Buffalo Bills on October 3 on Thursday Night Football, pairing them with the brown jerseys. It marked the first time the team wore an all-brown combination in team history. On April 14, 2015, the Cleveland Browns unveiled their new uniform combinations, consisting of the team's colors of orange, brown and white. Former longtime placekicker and fan favorite Phil Dawson signed with the 49ers in 2013, along with backup quarterback Colt McCoy.

Often called the "Turnpike Rivalry", [28] the Browns' main rival has long been the Pittsburgh Steelers. Though the Browns dominated this rivalry early in the series winning the first eight matchups , the Steelers currently have the all-time edge 74—58, making it the oldest rivalry in the AFC. Former Browns owner Art Modell scheduled home games against the Steelers on Saturday night from to to help fuel the rivalry. Though the rivalry has cooled in Pittsburgh due to the Modell move as well as the Browns having a 6—33 record against the Steelers since returning to the league in , including one playoff loss , the Steelers are still top rival for Cleveland.

Originally conceived due to the personal animosity between Paul Brown and Art Modell, the "Battle of Ohio" between the Browns and the Cincinnati Bengals has been fueled by the sociocultural differences between Cincinnati and Cleveland, a shared history between the two teams, and even similar team colors, since Brown used the exact shade of orange for the Bengals that he used for the Browns.

This has changed since then, as the Bengals now use a brighter shade of orange. The rivalry has also produced two of the eight highest-scoring games in NFL history. Cincinnati has the all-time edge at 50–39, having won the majority of games against the Browns since they returned to the NFL in 1999 (25 wins for Cincinnati and 12 for Cleveland). Created as a result of the Cleveland Browns relocation controversy, the rivalry between the Browns and Ravens was more directed at Art Modell than at the team itself, and is simply considered a divisional game in Baltimore.

Unlike the other two rivalries, this one is more lopsided. Additionally, this matchup is more bitter for Cleveland than the others because the draft picks made around the time of the move produced the rosters that won the Super Bowl for the Ravens. Had the Browns stayed in Cleveland, these rosters, drafted by general manager Ozzie Newsome, might have given the Browns the title after a decades-long drought.

The Lions won three of those championships, while the Browns won one. This was arguably one of the NFL's best rivalries in the 1950s. For a number of years, the two teams played an annual preseason game known as the "Great Lakes Classic". The Bills rivalry has its roots back in the days of the AAFC, when there was a team from Buffalo with the same name in that league.

Since the current incarnation of the Bills joined the NFL, the Browns and Bills have played each other from time to time.

Though the Browns and Bills are in different AFC divisions, a mellow rivalry has since developed between the teams due to the similarities between Buffalo and Cleveland and the shared misfortune between the teams. Despite this "rivalry" being known for ugly games, such as a Browns win in which quarterback Derek Anderson completed only 2 of 17 passes, [31] there have been some competitive moments between the Bills and Browns as well, such as an earlier playoff meeting and a pair of games with playoff implications in later seasons. The Colts rivalry was hot in earlier decades.

The Browns also beat the Indianapolis Colts in a divisional playoff game. The Browns had a brief rivalry with the Broncos that arose from three AFC championship games following the 1986, 1987 and 1989 seasons. In one of them, Denver took a 21–3 lead, but Browns quarterback Bernie Kosar threw four touchdown passes to tie the game at 31–31 halfway through the 4th quarter. After a long drive, John Elway threw a touchdown pass to running back Sammy Winder to give Denver a 38–31 lead. Cleveland advanced to Denver's 8-yard line in the final two minutes, but running back Earnest Byner fumbled just short of the goal line. The Broncos recovered it, gave Cleveland an intentional safety, and went on to win 38–33.

One study of NFL fan bases, while not scientific, was largely based on fan loyalty during winning and losing seasons, attendance at games, and challenges confronting fans such as inclement weather or long-term poor performance of their team.

Perhaps the most visible Browns fans are those that can be found in the Dawg Pound. Originally the name for the bleacher section located in the open east end of old Cleveland Municipal Stadium , the current incarnation is likewise located in the east end of FirstEnergy Stadium and still features hundreds of orange and brown clad fans sporting various canine-related paraphernalia.

The fans adopted that name after members of the Browns defense used it to describe the team's defense. Retired cornerback Hanford Dixon, who played his entire career for the Browns, is credited with naming the Cleveland Browns defense "The Dawgs" in the mid-1980s.

Dixon and teammates Frank Minnifield and Eddie Johnson would bark at each other and to the fans in the bleachers at Cleveland Stadium to fire them up. It was from Dixon's naming that the Dawg Pound subsequently took its title. The Browns Backers fan organization has a large worldwide membership, [38] and Browns Backers clubs can be found in every major city in the United States and at a number of military bases throughout the world, with the largest club being in Phoenix, Arizona.

This has raised interest in England and strengthened the link between the two sporting clubs. The Cleveland Browns were the favorite team of Elvis Presley. The Cleveland Browns have the fourth largest number of players enshrined in the Pro Football Hall of Fame with a total of 16 enshrined players elected based on their performance with the Browns, and eight more players or coaches elected who spent at least one year with the Browns franchise.

Otto Graham was the first Browns player to be enshrined, as a member of the class of 1965, and the most recent Browns player to be included in the Pro Football Hall of Fame is Gene Hickerson, who was a member of the class of 2007. All of the Browns' Pro Football Hall of Fame inductees thus far have been from the pre-move incarnation; no member of the Hall of Fame played for the Browns after their return. The Cleveland Browns legends program honors former Browns who made noteworthy contributions to the history of the franchise.

In addition to all the Hall of Famers listed above, the Legends list includes many other notable former Browns. For a number of years, number 19 was unofficially retired for Bernie Kosar, aside from Frisman Jackson briefly wearing it and later changing numbers due to fan outcry over the number being used.

Miles Austin later asked for and received permission from Kosar to wear 19, after which the number returned to regular circulation for the Browns. Beginning in 2010, the Browns established a Ring of Honor, honoring greats from the past by displaying their names around the upper deck of FirstEnergy Stadium.

The inaugural class in the Browns Ring of Honor was unveiled during the home opener on September 19, 2010, and featured the 16 Hall of Famers listed above who went into the Hall of Fame as Browns. Play-by-play announcer Jim Donovan calls games on-site alongside color analyst Doug Dieken, a former Browns left tackle, and sideline reporter Nathan Zegura (though Zegura is currently serving an eight-game suspension for arguing with officials during a game).

The Browns have either directly or indirectly been featured in various movies and TV shows over the years.



Otherwise, it just sounds like argument from assertion. Hey, wait a minute. You might be able to sketch out a more formal argument against it from http: No single giant battlemech could maximize all combat parameters relative to other equally expensive battlemechs, but all of them could defeat a human in single combat. Yeah, if you could clone Johann von Neumann, that would be pretty nice.

Aside from the weirdness of guessing that hey, for all we know space probably ends a few inches above your head, the person saying this has never seen an elephant.

Why would someone go through the effort of making their visual pattern recognition AI frex have its own set of goals that it can take actions to fulfill? But there are a bunch of teams who are specifically trying to develop general intelligence, and it seems like maybe one of them will succeed. If we accept that some kind of general reasoner is possible, then people will try to build it, and maybe some of them will succeed.

Well, in the trivial sense, building a general reasoner is definitely possible. If we assume that there are some non-trivial physical limits on computation, then this could be a huge obstacle in the path of the Singularity.

Rather, I am open to being convinced. Has this really been solidly established? Sure, if you build a Jupiter-sized AI general reasoner out of 31st century technology, it might be worse at number theory than a Jupiter-sized AI specialized number theory agent made from 31st century technology. But it might still be vastly better than a human. If you need to screw in an ordinary Phillips-head screw, then your Swiss Army multitool will probably do the job.

If you want to fix the tiny screw in your glasses, or a huge rusted bolt, or a tricky hex bolt all the way inside your engine, then you need a special screwdriver. Several of them, in fact. They will all be about as big as that multitool, but they will be way better at their specific jobs.

What is the significance of that success? I think it is large, but finite, and much smaller than you seem to think it is. Your fundamental appraisal of the value of intelligence is out of whack, in my opinion. Ender Wiggin crushes the opposition at fistfights and computer games and war. In real life the computer games part is because he is smart, the war part is partly because he is smart and partly coincidence, and the fistfight part has nothing to do with him being smart.

We have a good idea of what intelligence is. In books they do accrue, and thinking about how to fight makes you a much better fighter. The power of reason manifests, the hero thinks — when he punches like this I shall move like that, then my arm will reach up like this — and kaPOW! Yet in real life — no. I think the hysteria about artificial intelligence owes itself in large part to people not understanding this. Moreover, I would expect our brains, and the resources we devote to information more generally, to go in the other direction if they were to change significantly.

We need general intelligence because of the great novelty of our environment. One might assume the environment will continue to change the way it has for the past few hundred years indefinitely, an almost infinite explosion in technological capability. Fundamentally I look at technology as exploitation of new prime movers.

First we have our muscles, then we have beasts of burden that can produce an order of magnitude more power. Then we have combustion engines that can go a few orders of magnitude above that.

Project Orion envisioned a spaceship several times larger than the largest container ships, blasted into space by thermonuclear explosions. There will be no more revolutions like moving from horse cavalry to tanks, just gradual progress, like the current crawl forward of tank to slightly better tank. Being super-smart can only allow you to exploit new reservoirs of power if there are new reservoirs of power to exploit.

The geniuses behind the Manhattan Project very quickly moved from nothing to nuclear bomb to thermonuclear bomb to miniaturized thermonuclear bomb to… slightly more miniaturized and streamlined thermonuclear bomb. First we exploited the power that keeps the earth molten beneath our feet, then the power that keeps the sun burning in our sky.

The universe is pretty well characterized. There are mysteries, but the mysteries are smaller and less promising than ever before — because our theories are more powerful and resilient than ever before. You can imagine some transcendent manifestation of one virtue clawing in everything, but in fact the beings that claw in the most have a mix of virtues.

A machine can be a lot better at information processing. Also it can develop a lot more physical power. Would you rather have a super powerful tank or a super smart computer? What if the super smart computer has a tank? The argument is valid but uninteresting, because what we are concerned with is the safety of a computer, and an ASI that is inadequately boxed has the opportunity to wreak havoc by taking over automated weapons systems, and in many other ways.

That is to say, intelligence can help you achieve your goals and defeat your enemies, but it is not sufficient by itself.

They were right about the stuff they knew they were right about. That would be like comparing many-worlds theory to quantum electrodynamics.

They are not perfect; footnotes need to be added about very small and very large energy scales. Footnotes may need to be added to quantum electrodynamics and the theory of relativity. But all they ever will be are footnotes, because the excellent records of predictability within the bounds of what they try to describe stand, and can never be erased by any new discovery. We know about a lot more stuff than we used to. In particular, we know what lights up the universe.

This results in very diffuse matter, because photonic interactions are what drive the clustering of matter into stars and planets. We understand pretty well how the dense stuff that we care about works. We could conceivably profit greatly by greater understanding there, like if we figured out how to explode neutron stars so we could harvest the resultant scattered high-Z material.

But fundamentally that would just be another way of mining high-Z material, not something radically new. The point is that the set of stuff we care about is relatively limited, and we have a foundational understanding of it. The jump from nothing to here is unimaginably larger than the jump from here to anywhere else. No dark energy reactors, no antigravity drives, no faster-than-light travel. We know what the rules of the game are at the scales we care about.

I think this fundamentally misinterprets the concept that modularity proponents are thinking of. In a big complex program that can do many real-world tasks, typically there are many, many small subsystems, some of which are used for almost all tasks, some only for a few tasks, others in-between. A real task will involve some combination of many modules interacting with one another, each doing a small part of the work.

And of course tasks will have both kinds of modularity. Whether or not you think minds are a special case, simple and general, unlike other software systems which are usually complex and modular, is a different question. That said, of course an AI could be dangerous. In fact, we are experiencing some of those dangers right now.

A world in which getting tanks is easy will not exist. Either the tanks will not be connected to the net, or the net will be rendered secure even against AI-level hacking, or merely human net wars bleeding into meatspace will render civilization incapable of maintaining the net.

I think if it manages to actually solve NP-hard problems in smallish-power polynomial time… Well yeah, and if it manages to build an FTL engine, an inertialess drive, a perpetual motion machine, or some gray goo nanotechnology, that would be pretty cool, too. First of all, thanks for the reply; I really do appreciate it despite my abrasive demeanor.

And now, on to more abrasion. I think that this is the weakest point in the original article, IMO. We can all agree that Mozart, Beethoven, and even Jimi Hendrix are better than your 3-year-old upstairs neighbour, but who is better: Jimi Hendrix or Beethoven? Jimi Hendrix or Slayer? It is no longer so easy to decide. Well, it depends on what you are trying to achieve; however, all of them are absolutely smarter than a rock.

I agree with you about generality being a continuum; however, I am not convinced that the right end of the continuum can be extended indefinitely toward perfect generality. The battlemech can crush a human every time, but it may not be able to boil the perfect egg or even any egg for that matter or write a sonnet. I am not convinced that the same tools both mental and physical that are useful for crushing humans are also useful for writing sonnets, and vice versa.

Certainly, humans can do both, but they do so rather poorly. Yeah, this is the second weakest argument in the article, IMO, probably due to poor phrasing. Smarter than a normal human? Smarter than the smartest human who ever lived? Well, maybe, assuming that this claim is even coherent as per above. No, most probably not. There are physical limits involved. Yes, and I was unimpressed with his articles although I must admit my weakness to the Avatar meme.

Imagine that you live in Ancient Greece. You are crazy smart. Smarter than Von Neumann. Smarter than ten Von Neumanns. Without ever stepping outside of your bathtub, would you be able to think your way toward correctly predicting black holes? Or even cell theory? I would argue that you could not, for two reasons. Secondly, even if you did possess those concepts, there is a wide range of perfectly internally consistent and elegant models that can explain them; most of them are wrong, and you will never find out which is which until you actually get your hands on some germanium.

Furthermore, the stronger version of this claim is IMO even less defensible. You already know my answer: Everest is larger than humans along all dimensions. Again, the degenerate case is Watson, which is great at Jeopardy but awful at everything else. This seems like a strong answer to the contingent argument.

I worry the original article was making a necessary argument, that if something is good at one thing it must be worse at something else. That seems completely wrong to me; humans are smarter than bacteria along pretty much every dimension, no tradeoffs required. Part of me wants to argue that it would be very strange if the maximum computation per unit area were anywhere near human scale, but I feel like maybe we should just avoid that entire argument.

Fine, maybe cramming computational power into a small area is hard. So make a bigger computer! Ok, how much bigger? Bigger than our galaxy, maybe? This is how we got into this whole argument in the first place. Furthermore, I am far from convinced that computation can be scaled even linearly with volume without running into some pretty serious diminishing returns.

Agreed, though I am not sure how much further. This is where we run into a problem, because tests take time — and they take the same amount of time regardless of how smart you are. And if you want to confirm the Higgs Boson, you need to build a supercollider. This will take longer than growing some rice. And if you want to land on Alpha Centauri Bb… hoo boy. This problem leads to two immediate consequences. Furthermore, things like supercolliders are incredibly expensive; meaning, they consume a significant portion of resources that are available to us.

The AI would have to compete with other actors (notably, humans) to acquire these resources. A computer is better at everything than a computer. I agree we can imagine a system of physics that limits intelligence at around this point.

I feel like this might be our most fundamental disagreement. Smart people have an advantage in knowing what tests to do, and knowing how to design the tests well. Just to give an example, if I knew all of modern science including the experiments that had been done to prove it and my only job was to replicate all of those experiments and confirm that they still worked, I could probably do most of it in a few months to a few years.

Building a supercollider would admittedly be the hard part, but not if I was super-rich and had the resources of an entire civilization, and there might be ways to avoid using supercolliders if I were smart enough to think of them. But Napoleon conquered Europe without needing to do any tests, and Einstein discovered relativity without making a supercollider. We have real life humans now who are much much much smarter than their peers. Did John von Neumann take over the world?

Was he even as formidable an adversary, if you could choose between the two, as a young, strong, low FTO thug? They are the best fighters, the best manipulators, they win all the time at everything.

They win so much they get tired of winning. You might put a bit more weight on what actually happens in the real world. Intelligence, practically speaking, is one factor determining how efficiently you use the resources at your disposal. You can use intelligence to gather more resources — true. You can use strength to gather more resources — true. You can use beauty to gather more resources — true. You can use social acumen to gather more resources — true.

You can use resources to gather more resources — true. None of these differ cardinally in that regard. If you have a lot of money you can invest it relatively safely and make a decent return.

If you are very strong you can win fights and contests and make a very significant amount of money, which you can then invest — etc. If you are very beautiful, fast, socially adept — all the same. The thing is that while we live in a world of contests, and the theoretical reward for being able to win at some domain constantly is basically infinite, pointing that out ignores the also-infinite theoretical exploitation of other traits.

Can you imagine a being so intelligent that it foresees everything, makes all the right decisions, takes over the world? Can you imagine a master manipulator so expert that they get everything they want out of their victims?

Can you imagine a girl so beautiful everyone she meets falls in love with her and wants to please her? I can use my tank to blow up your computer. Oh — but you can use your computer to make money on the stock market and buy lots of tanks! But I can use my super-tank to intimidate people into paying me tribute and buy still more tanks. Maybe more tanks than any amount of strategic genius on your part can counterbalance.

I tell you, I bet on the people whose plan is making the largest possible number of the best possible tanks, if I had to choose. I think we are arguing about matters of degree, not of principle. Yes, of course computers will keep improving, but I am not willing to believe point (a). At least, not without some additional evidence. I think that, as the original article says, you are vastly overestimating what can be accomplished with intelligence alone, as well as how far general human-style intelligence can be increased by conventional means (as contrasted with, say, Dyson spheres and such).

I think that the problem is that you have just two categories in your mental model of intelligence: ordinary human-level and godlike. But I disagree with this model; I think there is a spectrum between our current human level and the Singularity.

That said, I am denying the claim that you can move across this spectrum in the blink of an eye. By analogy, I think that commercial airplanes will keep getting faster. However, I would disagree that, because of this fact, we should worry about people using commercial planes as light-speed projectiles. My point was that in our current world today we already have half of the thing you described. I did grant you that they do so at ordinary human speeds, not at some large multiple of human speed.

In the article you link, you grant your hypothetical AI many other powers besides these. Our science took hundreds of years at least! Building a supercollider would admittedly be the hard part, but not if I was super-rich and had the resources of an entire civilization…. As I said in my previous post, if a superintelligent Singularity-grade AI already existed, then it could totally have all that. But seeing as it needs to have such resources in order to exist in the first place, this looks an awful lot like begging the question.

Forget bosons, how will you make rice or cows grow faster? This is obviously false. Napoleon performed plenty of tests.

He sent out scouts, organized logistics, and even fought real battles and learned from the results. Napoleon may have been a military genius, but at the end of the day, he still had to fight in the real, physical world. Napoleon rather famously failed to conquer Europe, got himself boxed on Elba, escaped the box, failed to conquer Europe again in spite of the full-scale test data from his first experience, got securely boxed on St. Helena, and never escaped that one.

However none of those three statements are required for superintelligence. As long as there is sufficient space beyond human, then a superintelligence will be possible.

On this it should be noted that, IIRC, the timescale for a hard takeoff is up to one year, not so fast that nobody notices. And even that point is not accepted by everyone; there will still be those who argue for a soft or medium takeoff.

No matter how fast superintelligence develops, the problem of unfriendly AI would still exist; slower development just gives more time to prepare.

As for part (c), as has already been pointed out, it is not required that the size of a computer be limited in some way. However, given that we already know human-level intelligence can be achieved within the size of a brain, it does not seem an incorrect assumption that future AI will not be supermassive, given the advantages of silicon over neurons.

Then you appear to be making two competing claims. First you state that you believe a spectrum of intelligence will have quite a distance between human, post-human and full superintelligence. Do you believe that something recognized as superintelligence can exist but humans will not achieve it, or that it will simply take a really long time?

The former I would find somewhat incoherent, so if it is the latter I would say that the burden of evidence for this rests with you, not with Scott or other supporters. Since dangerous superintelligence has apparently now become the establishment view, I would ask what evidence you have that the majority of AI experts are wrong in their assumptions.

Then you move on to claims that superintelligence could not achieve much without resources and tests. I think this is a much more defensible claim against superintelligence. I would use the following arguments against it: Even with the admitted danger, people are not going to build a superintelligence and then simply lock it away; what would be the point?

So it will probably have access to a lot of human knowledge, and not need to replicate vast amounts of science. It will probably have a way of physically interacting with the world like cameras and robotic arms. It may even get access to the internet, and subsequently be able to hack into a lot of more secure resources.

Secondly, I would suggest that simulation brings a lot of risk. Even if the AI is very heavily supervised in its interactions with the outside world, what is to stop it from creating an accurate physical simulation and running experiments within it?

Knowledge of physics and superintelligence would probably be enough to devise an extremely accurate world simulation that could be relied upon to perform experiments and then take actions in the real world. Huge supercomputers crank away for days to simulate a second or two of a nuclear explosion. First of all, I agree with Scott that modern-age humans are not the pinnacle of intelligence.

Future humans may be smarter, and AIs may be smarter still. However, I disagree that AIs will one day get so smart as to become godlike. I think one problem we are having is that of semantics. Regardless of how smart AIs can practically get, I doubt that they will be able to get there incredibly quickly. Still, here are a few of my reasons:

Surely, both you and Google are smarter than a rock; but is Google smarter than you? When I want to add up numbers very quickly, I use Excel, despite the fact that modern humans outsmart Excel by a massive margin.

If you insist on making your AI as general as possible (or perhaps as human-like as possible), you may be placing significant limits on what it can achieve in any specific area. You also need physical resources, labor, and, above all, time. Combined with the point above, this puts some pretty harsh limits on what an AI could reasonably achieve. Compare these powers with FTL travel. The whole point of running experiments is to figure out the rules that govern the world, so that you can program them into your simulation.

I guess I should also point out that, despite all of my objections to the Singularity, I do agree that AI can be dangerous — just like nuclear power, fossil fuels, mass production, the Internet, and even fire. Consonantly, in the present-day AI Golden Age, human-level cognitive performance is exhibited by algorithms that have little or no ratiocinative component whatsoever.

My first point leads into the second, about simulation; I would expect any superintelligence to be at least given some knowledge and resources, or else it would be entirely useless, and from that I assume it would not need to do physical experiments to gain the knowledge required for an accurate simulation. So, regarding your points: On intelligence as a linear quantity, I agree that there is probably a lot to human intelligence, and even with something like IQ you might witness a lot of variance in certain domains between two individuals with the same value.

But I fail to see how this makes the risk of superintelligence less. Your final two points, I believe, are arguments against some utopian vision of superintelligence but do not reduce the dangers of superintelligence.

My first two points were meant to attack the idea of an exponentially growing intelligence. Today, we have AIs that can compose poetry, recognize images, plot routes, play Go, etc.

However, they do so in a way that is very different from how humans, with their more general intelligence, approach the same tasks. It may very well be the case that general intelligence is simply not the right tool for the job. You are right in saying that an AGI could just buy an Excel license, but then, so can anyone else. The claim is not merely that AGI is dangerous — any technology is dangerous, after all — but that it is orders of magnitude more dangerous than humans, because it can improve itself exponentially and nearly instantaneously.

By contrast, if you had an intelligence that developed at the same rate as humans do, then it would be only as dangerous as the average human, and we already know how to deal with those more or less.

I admit that this is probably the weakest of my arguments. And slow exponential growth is no danger; or, at least, not any more dangerous than the growth we humans have been experiencing throughout history (which is pretty dangerous, admittedly).

The real world is super slow; for example, it takes a whole year just to observe all four seasons. And finally, even if the AI could somehow become superintelligent, it is only dangerous if it can actually do something with that intelligence.

And if it needs to acquire these powers in order to become superintelligent in the first place, then the whole idea is a bust. Are ten Von Neumanns smarter than one Von Neumann? Ten Von Neumanns could do about ten times as much work as one Von Neumann, barring overhead. They could explore ten alternatives in parallel, rather than in series. The Singularity is a point of sudden runaway technological growth initiated by an artificial intelligence.

We cannot predict what will happen in such an event. It could, for example, be a result of technologies that raised human intelligence, with that increased intelligence used to improve those technologies, with those improved technologies used to raise intelligence further, and so on.

I wonder if the computer science graduate shortage could be a sort of paradox like that, where there is a shortage of truly highly skilled candidates for employment, but an excess of people who are credentialed but lack the actual skills required.

This, combined with the fact that a bad programmer can actually have negative net productivity, makes it completely unsurprising to me that some people with CS degrees are unable to find jobs. Of course, this is just my two cents. I would like to see more thorough studies that try to quantify these effects rather than comparing raw unemployment numbers. Am programmer, can confirm (or rather, add a data point). The most common failure modes include, in order of decreasing frequency:

And good programmers frequently get jobs via connections and never show up in resume piles at all. Shame about what happened to that guy.

Really good writer on tech issues, but the brain eater got him a few years ago. Er… what happened to him? I used to read his articles back in the day, but eventually I stopped — partially because I moved on to other things, and partially because he started repeating himself….

Oh, and yes, you can extend Ruby with C, and that is the right way to write a dynamic application with a small critical section. It was similar to the FizzBuzz problem, only perhaps a little easier if you can believe that. Is this just a matter of all CS degrees not being created equal and the candidates people complain about coming from diploma mills? Can employers really not distinguish the diploma mills from the legit schools? Looks like I can get a job at your firm. I like that link, thanks.

Our version of the problem did not need the modulo operator, and most people still failed for reasons 1 and 2. Though I was thinking that it would depend on the language whether it was more efficient to use the modulus or to keep counters for 3 and 5, which of course would use only addition. That was my guess too — see my replies to Freddie here. Try the colossal time spent on mostly failed job applications, and the endless hours sitting in rooms watching people fail to code. So back to those CS grads deBoer is talking about.
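For readers who haven't seen it, here is a minimal sketch of the two approaches being compared; this is generic FizzBuzz-style Python, not the firm's actual screening question, which isn't given here. The modulo version divides on every step, while the counter version uses only addition and comparison.

    # Modulo version: test divisibility directly.
    def fizzbuzz_modulo(n):
        out = []
        for i in range(1, n + 1):
            word = ""
            if i % 3 == 0:
                word += "Fizz"
            if i % 5 == 0:
                word += "Buzz"
            out.append(word or str(i))
        return out

    # Counter version: no division at all, just counters that reset at 3 and 5.
    def fizzbuzz_counters(n):
        out = []
        c3 = c5 = 0
        for i in range(1, n + 1):
            c3 += 1
            c5 += 1
            word = ""
            if c3 == 3:
                word, c3 = word + "Fizz", 0
            if c5 == 5:
                word, c5 = word + "Buzz", 0
            out.append(word or str(i))
        return out

    # Both produce identical output.
    assert fizzbuzz_modulo(100) == fizzbuzz_counters(100)

Whether the counter version is actually faster depends on the language and hardware, which is presumably the point being made above.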

What do you think of http: I stopped looking at resumes for the candidates I interview — it was too depressing. So if you see it on a resume, then… yeah stereotypes. Only a few very privileged companies can afford to do that. If that sounds like a machine learning problem, well then there you go.

If you are a small software startup, you can afford to individually interview every applicant, observe how they solve real-world problems, and maybe even mentor them. These filters have to be super-efficient (otherwise you just get bogged down again); in CS terms, you are willing to accept some loss of correctness in exchange for execution speed.
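To make the correctness-for-speed trade concrete, here is a hypothetical Python sketch of a cheap screening filter; the keyword list and resumes are invented for illustration and are not from the comment above. The cheap pass knowingly produces false negatives and false positives so that the expensive step (a human interview) only ever sees a short list.

    CHEAP_KEYWORDS = {"python", "algorithms", "git"}

    def cheap_screen(resume_text):
        # O(number of words): pass anything that mentions a keyword.
        words = set(resume_text.lower().split())
        return bool(CHEAP_KEYWORDS & words)

    resumes = [
        "Built Python services, strong algorithms background",
        "Ten years shipping reliable embedded firmware",   # skilled, wrong keywords
        "python git algorithms synergy certified ninja",   # keyword stuffing
    ]

    shortlist = [r for r in resumes if cheap_screen(r)]
    # Only the shortlist costs interviewer time; the second resume is a false
    # negative, and the third is a false positive the later stages must catch.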

Now, whole industries spring up around evading your filters, selling you better filters, evading these better filters, and so on, until all is consumed in the fiery maw of Moloch. Hiring for programmers usually comes in several stages, with later stages considering fewer people but expending more effort per person. By far the biggest reduction in people-per-stage happens in the resume screening phase, where you take a stack of resumes or LinkedIn pages, or whatever and decide which ones look most interesting to you.

This is typically done by hand, and manages to be both labor-intensive and surprisingly difficult. You could invest more in training people, but that still leaves the problem of identifying the people who are worth training — and that was most of the problem in the first place! The folks over at Triplebyte are making a valiant attempt at it. So, bringing it back to the topic of the rest of the thread: Thomas Ptacek has good thoughts on hiring for tech.

The process should be standardized as much as possible. But read the whole thing. Yesterday a coworker interviewed someone with a master's in comp sci, apparently going for their PhD, who could not figure out a dead-simple algorithm or properly differentiate between or use lists, dictionaries, or maps in Python. When any developer gets into a role of hiring people, he can retroactively win all his flame wars from (2) by enforcing his rules from (1).

They were likely completely random or even bad. By the time anyone figures out that the hiring process sucks, there is a new batch of developers coming through with their own biases. The pre-interview steps are probably often worse than useless; that is, you might have a higher proportion of better programmers among your culls than among your picks. New grads, on the other hand, are appealing to recruit. However, I never got the impression that aptitude with the technical skills was the main reason for rejection.

The impression I got was that they were rejecting people mostly on grounds of mismatched personality. And to at least some degree, those are actually relevant. A couple of times I was asked to send in some basic stuff beforehand — easily faked by someone with access to a few bucks and a small jobs board.

The only time my ability was actually properly tested was when I was asked to work for free for a week. But I got that far without any actual checks except references. This matches my experience. I work at a modestly sized company. We get hundreds to thousands of resumes for every position.

At most a handful are capable of holding enough of a conversation, or putting together a good enough writing sample (the latter on their own before the interview, not under time pressure), for us to even believe we can train them well enough to put in front of a client. I do think part of the problem is turnover, too. I also think there is a simultaneous over- and under-supply of STEM workers.

STEM is a ludicrously ill-defined field. It turns out that the whole metric is sloppy and absurd, so the two claims are actually consistent. I do interviews at a technology company. Actually, what I said about never being tested was wrong.

I had forgotten one interview in which the interviewer was a techie and did some basic probing into whether I remembered anything at all about RIPv2 from a high school Cisco certification. It never happened before or after that, though.

How difficult were your CS classes? How far is this comic from the truth and how different do you assume this to be in other countries?

Which is to say, not exceedingly difficult for competent students, but still requiring strong logic and math skills and a willingness to put in long hours on assignments in addition to lectures and labs. I assume that Canadian universities are generally comparable to US universities. And then they come out with a qualification and the companies all want, as is pointed out, the stars. I also think the focus on CS degrees vs. CS jobs might be slightly misleading. This has been my experience.

After that I thought it would be easy for me to find my dream job. I see two competing explanations for the alleged STEM worker shortage. A plumber with ten years of experience may or may not be good at plumbing, but the good ones are probably able to handle just about any plumbing task that might arise. In contrast, a good Node.js developer has a much narrower and faster-changing specialty. People who majored in computer science or electrical engineering decades ago were probably better equipped to do basic job tasks when they graduated than their counterparts graduating today.

Evidence against this would be the numbers showing that the number of STEM graduates now is about the same as it was in the past, and that the overproduction of college graduates is entirely accounted for by non-STEM majors. The flaw is the assumption that the specialized skills are required to perform the job, rather than just stated by the employer out of either incompetence or a desire to not hire anyone combined with a requirement to advertise the position.

How much is it worth to the employer to have someone who knows those things, as opposed to someone who they have to train in them? Speaking as someone who currently hires people based heavily on a job knowledge test that amounts to a trivia test….

Getting people who are subject matter experts up to speed takes months; without that expertise it takes even longer. There is no shortage of applicants, but there is a shortage of qualified applicants.

I can confirm that many nominal CS graduates are incapable of answering the most elementary questions. We have just as much difficulty hiring for a much cheaper, growing city as we do for New York.

My impression was that the same dynamic was very prominent for engineers: 20 years' experience as a working engineer (as opposed to as a manager) meant that losing a job was a disaster. Anecdote to the contrary: last year I found another software engineering job (actually two competing offers) within a couple of months, and this is after 24 years in the business.

This is likely largely because I have a trendy tech company on the resume. But then again, I was recruited by them when I had 18 years in the business.

There were no close states that could have flipped to give Hillary a victory. Trump must be the luckiest guy in the world. Apparently a complete incompetent who just trips over a small loan of a million dollars one day and ends up with billions, drools his way through 14 seasons of a top-rated TV show and then stumbles into the most powerful office in the world by random chance. I think you are overstating the case a little. On the other hand, if he were as incompetent as a lot of the critics seem to believe, one would expect that he would have lost money, not made it.

This group interview with Trump biographers, I think, addresses this point pretty starkly: But he goes to the closing, they sit up there and sign all the documents with all the mob guys, you know, to buy all the leaseholds.

And Fred and Donald leave and they go down to the limo, and somebody upstairs realizes that Fred missed one document. I mean, I had his tax returns at that time. He was worth nothing. Fred Trump died in 1999. Which skill set is more useful for running an already established and pretty phenomenal nation? Some guy works super hard, his kids see him work super hard and gain his work ethic. But they make sure to take time to raise their own kids, and all those kids see is that they are rich and dad works normal hours; they figure, I bet I can get that too.

In most industries, being qualified for a job is a step function, or close to it. With a large, exponentially distributed talent pool, all of these can be true at the same time. In fact the talent pool is even more spread out than this. First, a good employment sector to compare STEM to at this point might be pro sports or entertainment.

Second, one reason why a tech company might outsource rather than hire minimally-qualified local workers is cost-of-living. I wonder if it would be good business for a big tech company to set up a campus in Oklahoma City or something, hire a bunch of minimally-qualified programmers, and start building business apps and login pages for the entire US. Music from pre-Edison to now is one of the key go-to examples, which fits under your entertainment comparison.

It could just be that you and your peers are clustered in the same area of the scale. There are many studies on programmer productivity, and the data seems pretty conclusive… If there are newer studies with contradicting findings, please share. While I was very cool on the whole March for Science thing (and we had corresponding marches in Europe; why, exactly? There was a lot of virtue signalling about being anti-Trump going on there, which was ludicrous), I have some sympathy for them on this. But the people who like to go on protests and put up selfies of themselves being achingly politically correct with their clever placards got their day out, and at least no windows were smashed in the process (not that I heard of, anyway), so that was nice.

Though on the other hand — too uncool to be infiltrated by the Black Bloc? Some antifas set up a table at the march I went to. Not an official table, they were not sponsored by the organizers, they were basically just squatters. Violence always gets more coverage. I wonder what the equivalence curve is; how many peaceful protesters do I have to assemble to get the same media coverage as punching one nazi?

I wonder if protesting has essentially become a cargo cult. I think leftists attacking Trump supporters during the primaries did more for Trump than women and science fans marching unmolested against him did after the election.

What about actual cargo cultists? Did they really expect airplanes full of cargo to land as a result of their rituals, or was it more like a standard religious thing, where they gathered together and had a good time performing some rituals, showing off to their peers how pious they were, without actually expecting anything unusual to happen? I feel like a broken record on this, but protests are about building up energy on your own side. The Democrats have been doing ridiculously well in fund-raising lately:

Darrell Issa (R-CA) alone. [Elizabeth] Warren over the course of an entire election cycle. In less than a day [over the AHCA], we will raise more than we raised for Ossoff in a week — which is more than we raised for Warren in a year.

The Russian revolution of 1905, which followed the Russo-Japanese War, might be the ur-example. Or the German revolution. Mass civil disobedience, or worse. Faced with such protests, government would be dead serious. On the other hand, all the examples I can think of date from before modern mass media.

And statistics and opinion polling. A mass gathering in the market square might be of far more importance, if the only way you can know what happens there is to be present there, instead of observing via TV.

A very important unrest might involve a truly significant proportion of the local populace. And other hand, all the people would not be thinking that the silent majority was staying at home listening to news reports about their signs.

It would feel like more like it truly was a demonstration of the will of the people rising against the government, and the individuals staying at home might even agree. Somebody shot at the protestors at Maidan. And then suddenly the then-president was giving speeches how he is the lawful president, from abroad.

Or another, more recent, inverted example of a vaguely similar thing: In the link about the scientific mavericks at http: I think a third-party observer would see that as quite a weak excuse. This is true only in an unhelpfully narrow sense. The steelman of the claim that this changes nothing is something like: I should note that I feel the same way as you as far as misinformation and hysteria about Internet privacy go, but this is one of the first issues in that arena that has me at least a little concerned.

The main difference to me between ISPs and every other service is that those are generally discretionary in a way that ISPs are very much not.

Alternatives like DDG are good enough for plenty of people (as half of HN will rush to tell you every time Google comes up), and I personally know a couple dozen people who chose not to use Facebook and have very active social lives. Though that can sometimes be uncooperative. Though if enough people feel their privacy violated enough to use Tor, one can hope more websites will allow its use. And since standards-oriented things like the Web are always multi-party, a lot of these issues are an intractable coordination problem. And then we need to look into why that happens.

Does it have any relation to how heavily the communications market is regulated and how hard it is to establish a new company in the space, which creates so many entry barriers and compliance costs that only the most deep-pocketed and well-connected companies dare to enter it?

Oh, it was deleted. Yeah, sorry, that was unfair to you but I wanted to enforce my own rule at least on myself. If you want to talk about it more feel free to shoot me an email. I think the reason to be uncomfortable with the march for science in the form it happened in is that it risks making support for science into an issue of partisan divide. There are plenty of examples of issues that are in principle not partisan but because they get associated with one side of the aisle the other side starts reacting very negatively.

I just think there is very little benefit in combining them, and a grave potential risk. This march made liberal points about science very visible while not doing so for any conservative issues (pro-GMO positions, studies of the benefits of trade, etc.).

I recall liberal essayists making similar criticisms about the failure of Occupy Wall Street. There has been no shortage of nonpartisan calls to support greater science funding in the past, or greater use of science in policy, but those are never going to get anyone but a few scientists out on the street to protest.

Even if a cause starts out nonpartisan, as this one may have, people will try to bring in their favorite partisan issues and make the march partisan. As the organization of such marches tends to spread through social networks, it will be hard to avoid attracting far more of one party than the other, and as the organizers lean towards including the other issues they care about, that will further exaggerate the partisan leaning.

In other words, conservatives are still the law-and-order party and liberals the anti-authoritarian party. Liberals can point to people suffering right now, while conservatives generally point to abstract rights violations, the dangers of government overreach, or the risk of tyranny. As a result, I believe liberals also tend to skew more towards the age range that is likely to go to marches to have fun, hang out, and get dates.

I think the general issue is the public-good problem in changing the world. If my efforts help change the world, the benefit is spread across everyone affected: I get a tiny share of it, and I know my efforts have only a tiny effect, so the payoff to me is unlikely to justify the cost. We solve that problem by linking the activity that is intended to change the world to indirect private benefits, most obviously an opportunity to socialize with people who have a lot in common with you, but also various sorts of rewards in fun, status, and the like.
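To make the free-rider arithmetic concrete, here is a toy back-of-the-envelope sketch. Every number in it is invented purely for illustration; the point is only the orders of magnitude, not the specific values.

```python
# Toy illustration of the free-rider logic described above.
# All numbers are hypothetical, chosen only to show the scale of the problem.

total_social_benefit = 10_000_000_000  # value to society if the policy changes ($), assumed
population = 300_000_000               # people who share that benefit, assumed
marginal_effect = 1e-7                 # assumed bump in probability of success from one extra marcher
cost_of_marching = 50                  # an afternoon of time, travel, etc. ($), assumed

my_share = total_social_benefit / population          # ~$33
expected_private_payoff = marginal_effect * my_share  # ~$0.0000033

print(f"My share of the benefit:           ${my_share:.2f}")
print(f"Expected payoff from my marching:  ${expected_private_payoff:.7f}")
print(f"Cost of marching:                  ${cost_of_marching}")
# The expected direct payoff is many orders of magnitude below the cost,
# so the private side-benefits (socializing, fun, status) have to carry
# the motivational weight.
```

Under any remotely similar numbers the direct incentive vanishes, which is why the indirect private benefits end up doing the work.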

But then the activity tends to optimize for those private benefits, at some cost to its ultimate objective.

The liberals are anti-government? Meanwhile, the conservatives who control the government are theoretically anti-government and in favour of limiting their own power, although mysteriously this fails to materialize in practice. To what extent is 2 the result of 1, do you suppose? Ideology is flexible enough to make a virtue out of necessity a lot of the time.

It would be a lot more useful if, instead of marching to get government funding, they just held fundraisers. I suspect being a marcher is more important than the cause, though.

From experience and observation, this is similar to highly addictive video games like League of Legends, World of Warcraft, etc. Of course the latter group will have worse outcomes.

It looks like they used the first-year results as a control for the second-year results, which is when the mandatory classes kicked in. The results got worse rather than better, and the paper says the mandatory classes were responsible. So… is this really true?

As I said downthread, this book is pretty fascinating. How can such failures in intelligence research come about?

Oh man, that sounds amazing. Is there a non-paywalled copy available somewhere? Methods, Criticism, Training, Circumstances is where that essay appears.

Hasok Chang tells the history of temperature measurement beginning with the invention of the thermometer — the devices were accepted fairly quickly, but then scientists spent a surprising amount of time trying to decide on fixed points to use to standardize thermometers so that measurements could be compared. The boiling and freezing points of water at sea-level air pressure seem obvious choices to modern audiences, but at the time establishing them was genuinely hard.

Does he say that? Could you give a precise citation? Or is it that he restricts his attention to those who accepted them?

That quote matches Jensen: that it took a century. Have you considered the possibility that he knows what he is talking about, rather than mangling what is, to you, easy to find under the streetlight?

Applying the lessons of thermodynamical history to neural science, should we expect that a clarified appreciation of cognition awaits a more nearly synoptic microscopic understanding of neural dynamics and anatomy?

Another book focusing on the early history of thermometers is here: This is the venue in which all systems of measurement (Celsius vs Fahrenheit, feet vs meters, etc.)

It really says something about your attitude toward religious traditionalists when reading the New Yorker gives you a better opinion of them.

Yeah, most secularists have no idea what we are like, or what motivates us. Not in any reality-based sense. If she does think they are related, that is persuasive evidence of the general lack of comprehension cited above.

No, that novel would be P. I am rather huffy about Atwood and her attitude to science fiction: very happy to use the tropes, very unhappy to be lumped in with those grubby genre authors; she writes literary speculative fiction, doncha know!

I actually feel the exact same way about Atwood, and most of her other stuff leaves me very cold; ironically, most of it seems to suffer from the exact sort of failure mode science fiction most often runs into.

They think women having kids is a divine blessing. Lots of them are women.

I thought it was based on the Iranian Revolution, translated to America and Christianity so the reader could identify with it better.

I read, and watch movies and TV, to escape from my own awfulness.

Christianity, as seen and presented by its critics, rarely includes much of Jesus or Salvation.

How could it, when those are the most clearly positive aspects of Christianity?

The second question that comes up frequently: Again, it depends what you may mean by that. True, a group of authoritarian men seize control and attempt to restore an extreme version of the patriarchy, in which women (like 19th-century American slaves) are forbidden to read. The regime uses biblical symbols, as any authoritarian regime taking over America doubtless would. The modesty costumes worn by the women of Gilead are derived from Western religious iconography — the Wives wear the blue of purity, from the Virgin Mary; the Handmaids wear red, from the blood of parturition, but also from Mary Magdalene.

Also, red is easier to see if you happen to be fleeing. The wives of men lower in the social scale are called Econowives, and wear stripes.

I must confess that the face-hiding bonnets came not only from mid-Victorian costume and from nuns, but from the Old Dutch Cleanser package of the s, which showed a woman with her face hidden, and which frightened me as a child.

Many totalitarianisms have used clothing, both forbidden and enforced, to identify and control people — think of yellow stars and Roman purple — and many have ruled behind a religious front. It makes the creation of heretics that much easier. Just as the Bolsheviks destroyed the Mensheviks in order to eliminate political competition and Red Guard factions fought to the death against one another, the Catholics and the Baptists are being targeted and eliminated.

The Quakers have gone underground, and are running an escape route to Canada, as — I suspect — they would. In the real world today, some religious groups are leading movements for the protection of vulnerable groups, including women.

What would be your cover story? It would not resemble any form of communism or socialism. It might use the name of democracy as an excuse for abolishing liberal democracy. Thus China replaced a state bureaucracy with a similar state bureaucracy under a different name, the USSR replaced the dreaded imperial secret police with an even more dreaded secret police, and so forth. The deep foundation of the US — so went my thinking — was not the comparatively recent 18th-century Enlightenment structures of the republic, with their talk of equality and their separation of church and state, but the heavy-handed theocracy of 17th-century Puritan New England, with its marked bias against women, which would need only the opportunity of a period of social chaos to reassert itself.

Like any theocracy, this one would select a few passages from the Bible to justify its actions, and it would lean heavily towards the Old Testament, not towards the New.

Since ruling classes always make sure they get the best and rarest of desirable goods and services, and as it is one of the axioms of the novel that fertility in the industrialised west has come under threat, the rare and desirable would include fertile women — always on the human wish list, one way or another — and reproductive control.

Who shall have babies, who shall claim and raise those babies, who shall be blamed if anything goes wrong with those babies? These are questions with which human beings have busied themselves for a long time.

She thinks Christianity can be used for good or ill and wrote a story where the latter occurred.

I do agree that much of both the marketing and the fan commentary are along the lines you mention, and are dumb. Watching the Hulu show, it did seem eerily relevant, but only in the sense of how fragile our current order is and how easily and quickly it could dramatically change under the right circumstances.

The colour symbolism of iconography (and the clue is in the term there) derives from the Greek tradition, where blue refers to the heavenly, the divine, and red to the mortal; that is why icons and images of Christ have red robes, to indicate the Incarnation. I refer you to Roman Catholic imagery of the Sacred Heart:

Blue signifies the heavens and the kingdom of God, which is not of this earth. Byzantine icons of Mary show her with red outer garments and blue ones on the inside; this signifies her original human nature (the red) and her heavenly nature (the blue). In Eastern iconography Mary was depicted in red or brown to show her as a physical, grounded being, but the earliest icons depict her in blue.

It could have depended on the availability of pigment: lapis lazuli was ground to create the blue colour and was a very expensive stone. Icons of Christ will show him with blue outer clothing and red inner clothing; his outer garments are blue and symbolize his true divinity.

