The worst offender among misunderstandings of this sort, the kind bred by the surface look (jizura) of the kanji, is 「交戦権」... Article 9 provides: 『RENUNCIATION OF WAR. Article 9. Aspiring sincerely to an international peace based on justice and order, the Japanese people forever renounce war as a sovereign right of the nation and the threat or use of force as means of settling international disputes. (2) In order to accomplish the aim of the preceding paragraph, land, sea, and air forces, as well as other war potential, will never be maintained. The right of belligerency of the state will not be recognized.』 In the Japanese text, that final sentence reads 「国の交戦権は、これを認めない」...
So "the right of belligerency of the state" was rendered as 「国の交戦権」... And because that rendering took hold, most people read it as "the state's right to wage war." Push it to the extreme and you even get people saying, "Even if an enemy invades, no 'right to fight' to repel it is recognized at all. That is the intent of the Constitution!"... Give me a break... Can a nation secure its very existence on such spineless terms? A "state" is not just for the people alive right now... It is also for your children, your grandchildren, and the long line of descendants who will carry on after them... Fortunately, the majority view among scholars, and the government's own interpretation, read 交戦権 as "the various rights that international law recognizes in a state that is in a condition of belligerency"... In other words, rights such as the visit, search, and capture of ships, the administration of occupied territory, and so on...
『A new generation of specialized hardware could make drug development and material discovery orders of magnitude faster. By Karen Hao, Nov 20, 2019.
At Argonne National Laboratory, roughly 30 miles from downtown Chicago, scientists try to understand the origin and evolution of the universe, create longer-lasting batteries, and develop precision cancer drugs.
All these different problems have one thing in common: they are tough because of their sheer scale. In drug discovery, it’s estimated that there could be more potential drug-like molecules than there are atoms in the solar system. Searching such a vast space of possibilities within human time scales requires powerful and fast computation. Until recently, that was unavailable, making the task pretty much unfathomable.
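To put that scale in perspective, here is a back-of-envelope sketch. The figures are assumptions for illustration (the often-quoted ~10^60 estimate of drug-like chemical space and a generous screening rate), not numbers from the article:

```python
# Why brute-force search of chemical space is hopeless: rough arithmetic.
# Assumed figures, for illustration only:
#   ~1e60 potential drug-like molecules (a widely cited estimate)
#   1e9 candidate evaluations per second (a very generous screening rate)
molecules = 1e60
evals_per_second = 1e9
seconds_per_year = 3.15e7

years = molecules / evals_per_second / seconds_per_year
print(f"Exhaustive search: ~{years:.1e} years")  # ~3.2e+43 years

# The universe is ~1.4e10 years old, so exhaustive enumeration is off
# by more than thirty orders of magnitude; models must prune the search.
```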
But in the last few years, AI has changed the game. Deep-learning algorithms excel at quickly finding patterns in reams of data, which has sped up key processes in scientific discovery. Now, along with these software improvements, a hardware revolution is also on the horizon.
Yesterday Argonne announced that it has begun to test a new computer from the startup Cerebras that promises to accelerate the training of deep-learning algorithms by orders of magnitude. The computer, which houses the world’s largest chip, is part of a new generation of specialized AI hardware that is only now being put to use.
“We’re interested in accelerating the AI applications that we have for scientific problems,” says Rick Stevens, Argonne’s associate lab director for computing, environment, and life sciences. “We have huge amounts of data and big models, and we’re interested in pushing their performance.”
Currently, the most common chips used in deep learning are known as graphics processing units, or GPUs. GPUs are great parallel processors. Before their adoption by the AI world, they were widely used for gaming and graphics production. By coincidence, the same characteristics that let them render pixels quickly also make them the preferred choice for deep learning.
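To make the "coincidence" concrete, a minimal sketch (plain NumPy for portability): a neural-network layer is one big matrix multiply, and every output element is an independent multiply-accumulate, the same uniform, parallel arithmetic GPUs were built to do for pixels.

```python
import numpy as np

# One dense layer's forward pass: a single large matrix multiply.
# Each of the batch * d_out output values is an independent dot product,
# so a GPU can assign them to thousands of cores at once; the same
# structure as shading many pixels independently each frame.
batch, d_in, d_out = 256, 1024, 1024
x = np.random.randn(batch, d_in).astype(np.float32)   # input activations
w = np.random.randn(d_in, d_out).astype(np.float32)   # layer weights

y = x @ w          # ~2 * 256 * 1024 * 1024 ≈ 5.4e8 floating-point ops
print(y.shape)     # (256, 1024)
```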
But fundamentally, GPUs are general purpose; while they have successfully powered this decade’s AI revolution, their designs are not optimized for the task. These inefficiencies cap the speed at which the chips can run deep-learning algorithms and cause them to soak up huge amounts of energy in the process.
In response, companies have raced to design new chip architectures specially suited for AI. Done well, such chips have the potential to train deep-learning models up to 1,000 times faster than GPUs, with far less energy. Cerebras is on the long list of companies that have jumped to capitalize on the opportunity. Others include startups like Graphcore, SambaNova, and Groq, and incumbents like Intel and Nvidia.
A successful new AI chip will have to meet several criteria, says Stevens. At a minimum, it has to be 10 to 100 times faster than general-purpose processors when working with the lab's AI models. Many of the specialized chips are optimized for commercial deep-learning applications, like computer vision and language, but may not perform as well when handling the kinds of data common in scientific research. "We have a lot of higher-dimensional data sets," Stevens says: sets that weave together massive, disparate data sources and are far more complex to process than a two-dimensional photo.
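For a feel of what "higher-dimensional" means here, a sketch with made-up shapes; the modalities below are hypothetical stand-ins, not Argonne's actual schema:

```python
import numpy as np

# A commercial vision sample: one regular grid with three channels.
photo = np.zeros((224, 224, 3), dtype=np.float32)          # H x W x RGB

# A (hypothetical) scientific sample weaving together disparate sources,
# each with its own scale, meaning, and preprocessing needs:
gene_expression = np.zeros(60_000, dtype=np.float32)       # transcript levels
drug_fingerprint = np.zeros(2_048, dtype=np.float32)       # molecular descriptor
dose_response = np.zeros((8, 4), dtype=np.float32)         # doses x time points

record = np.concatenate([gene_expression, drug_fingerprint,
                         dose_response.ravel()])
print(photo.shape, record.shape)   # the photo is uniform; the record is not
```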
The chip must also be reliable and easy to use. "We've got thousands of people doing deep learning at the lab, and not everybody's a ninja programmer," says Stevens. "Can people use the chip without having to spend time learning something new on the coding side?"
Thus far, Cerebras's computer has checked all the boxes. Thanks to its chip size (it is larger than an iPad and has 1.2 trillion transistors for making calculations), it isn't necessary to hook multiple smaller processors together, which can slow down model training. In testing, it has already shrunk the training time of models from weeks to hours. "We want to be able to train these models fast enough so the scientist that's doing the training still remembers what the question was when they started," says Stevens.
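On the point about hooking smaller processors together: a minimal simulation of data-parallel training's synchronization step, in plain NumPy. Real multi-GPU setups do this all-reduce over a chip-to-chip interconnect every step, and that communication is the overhead a single huge chip avoids; the worker count and gradient size below are illustrative.

```python
import numpy as np

# Data-parallel training across N devices: each worker computes gradients
# on its own data shard, then all workers must exchange and average them
# (an "all-reduce") before the next optimizer step. On real hardware the
# averaging crosses an interconnect and costs time on every step; a
# single wafer-scale chip keeps that traffic on-die instead.
n_workers, n_params = 8, 1_000_000
local_grads = [np.random.randn(n_params).astype(np.float32)
               for _ in range(n_workers)]

avg_grad = np.mean(local_grads, axis=0)   # the all-reduce result

# Rough traffic per worker per step (a ring all-reduce moves about
# 2 * payload per worker): 1e6 float32 gradients -> ~8 MB each step.
payload_mb = 2 * n_params * 4 / 1e6
print(f"~{payload_mb:.0f} MB of gradient traffic per worker per step")
```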
Initially, Argonne has been testing the computer on its cancer drug research. The goal is to develop a deep-learning model that can predict how a tumor might respond to a drug or combination of drugs. The model can then be used in one of two ways: to develop new drug candidates that could have desired effects on a specific tumor, or to predict the effects of a single drug candidate on many different types of tumors.
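A minimal sketch of the two usage modes just described; the model interface and helper names are hypothetical stand-ins, not the lab's actual code:

```python
# Two ways to use a tumor-response model, per the paragraph above.
# `model` is any callable scoring (tumor_features, drug_features) -> float;
# all names here are illustrative.

def rank_drugs_for_tumor(model, tumor, candidate_drugs):
    """Mode 1: fix a tumor, rank many drug candidates against it."""
    scored = [(drug_id, model(tumor, feats)) for drug_id, feats in candidate_drugs]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

def profile_drug_across_tumors(model, drug, tumor_panel):
    """Mode 2: fix a drug, predict its effect across many tumor types."""
    return {tumor_id: model(feats, drug) for tumor_id, feats in tumor_panel.items()}
```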
Stevens expects Cerebras's system to dramatically speed up both development and deployment of the cancer drug model, which could involve training the model hundreds of thousands of times and then running it billions more times to make predictions on every drug candidate. He also hopes it will boost the lab's research in other topics, such as battery materials and traumatic brain injury. The former work would involve developing an AI model for predicting the properties of millions of molecular combinations to find alternatives to lithium-ion chemistry. The latter would involve developing a model to predict the best treatment options. It's a surprisingly hard task because it requires processing so many types of data (brain images, biomarkers, text) very quickly.
Thus far, Cerebras’s computer has checked all the boxes. Thanks to its chip size—it is larger than an iPad and has 1.2 trillion transistors for making calculations—it isn’t necessary to hook multiple smaller processors together, which can slow down model training. In testing, it has already shrunk the training time of models from weeks to hours. “We want to be able to train these models fast enough so the scientist that’s doing the training still remembers what the question was when they started,” says Stevens.』