Blog

Da Vinci — A scalable architecture for neural network computing (updated v6)

2/3/2021

The updated version of this presentation provides detailed descriptions of two more projects (a rough code sketch of such a pipeline follows below):
  • Object detection
  • Body pose detection
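To give a feel for what such an object-detection pipeline looks like in code, here is a minimal, hypothetical Python sketch that uses torchvision's pretrained Faster R-CNN as a stand-in. The model choice, image file name and score threshold are illustrative assumptions only; the actual projects in the presentation target the Ascend/Atlas toolchain, whose code is not reproduced here.

# Minimal object-detection sketch (torchvision >= 0.13 assumed).
# Faster R-CNN is used here purely as an openly available stand-in.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("street.jpg").convert("RGB")   # any test image
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# Keep only confident detections.
for box, label, score in zip(predictions["boxes"],
                             predictions["labels"],
                             predictions["scores"]):
    if score > 0.5:
        print(f"class {label.item():3d}  score {score:.2f}  box {box.tolist()}")

Body pose detection follows the same pattern, with the network predicting keypoint coordinates instead of bounding boxes.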

Da Vinci — A scalable architecture for neural network computing (updated v5)

13/1/2021

The updated version of this presentation provides the following changes:

  • Additional application scenarios for artificial intelligence (upscaling and colourisation for video footage)
  • Information on our chip enablement layer and the computing language we use
  • Detailed instructions on how to prepare the development environment and the SD card image, and how to install third-party packages
  • A detailed description of how to create the first project, Colourful Image Colourisation (a minimal sketch of the underlying idea follows after this list)
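As a rough illustration of what such a colourisation model does, here is a minimal, self-contained Python sketch of the usual Lab-colour-space workflow: the network sees only the lightness (L) channel and predicts the two colour (ab) channels. The predict_ab function below is a hypothetical placeholder for the trained network, not the actual project code.

# Minimal colourisation sketch: grey input -> predicted colour channels.
# predict_ab() is a hypothetical stand-in for the trained network used in
# the actual project; here it simply returns neutral colour channels.
import numpy as np
import cv2  # OpenCV, used for colour-space conversion

def predict_ab(l_channel: np.ndarray) -> np.ndarray:
    """Placeholder for the colourisation network: maps the lightness
    channel (H, W) to two colour channels (H, W, 2) in Lab space."""
    return np.zeros((*l_channel.shape, 2), dtype=np.float32)

# Load an image and keep only its lightness channel as "greyscale" input.
bgr = cv2.imread("input.jpg").astype(np.float32) / 255.0
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)
l_channel = lab[:, :, 0]

# Predict the colour channels and recombine them with the original lightness.
ab = predict_ab(l_channel)
colourised_lab = np.concatenate([l_channel[:, :, None], ab], axis=2)
colourised_bgr = cv2.cvtColor(colourised_lab, cv2.COLOR_Lab2BGR)

cv2.imwrite("colourised.jpg", (colourised_bgr * 255).clip(0, 255).astype(np.uint8))

The presentation itself walks through the full workflow on the Atlas 200 DK; the sketch above only shows the colour-space idea behind it.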

Da Vinci — A scalable architecture for neural network computing (updated v4)

4/11/2020

The updated version of this presentation provides the following changes:
  • Additional background information in the introduction section
  • Comparisons of different processors for AI
  • A description of how AI processor architectures will shift in the future
  • The basic principles of Convolutional Neural Networks (a minimal sketch follows after this list)
  • The advantages of our special compute units for AI
  • A more detailed look at Mind Studio
  • An excerpt of the models listed in our Model Zoo
  • Advice on how to start with our Atlas 200 DK developer board
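To make the CNN principles mentioned above concrete, here is a minimal PyTorch sketch of a convolutional network for small colour images. It is a generic, hypothetical example for illustration only and is not taken from the presentation or from our software stack.

# A minimal convolutional neural network: stacked convolution + pooling
# layers extract local features, a fully connected layer classifies them.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3 colour channels -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                             # halve the spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # sized for 32x32 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One forward pass on a batch of random 32x32 RGB images.
model = TinyCNN()
logits = model(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])

The same convolution-and-pooling pattern is what the Da Vinci compute units accelerate in hardware.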

Artificial intelligence that can write almost anything

30/9/2020

GPT-3 is an autoregressive language model that uses deep learning to produce human-like text. The following blog post gives an idea of the quality of this AI model. It is scary and fascinating at the same time, especially when you consider "fake news". Please note that the text marked in bold is the input; the rest is the automatically generated text.
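For readers who want to experiment with autoregressive text generation themselves, here is a minimal sketch using the Hugging Face transformers library with GPT-2, a much smaller, openly available relative of GPT-3 (GPT-3 itself is only accessible through OpenAI's API). The prompt text is an arbitrary example.

# Autoregressive generation: the model repeatedly predicts the next token
# conditioned on the prompt plus everything it has generated so far.
# GPT-2 is used here as an openly available stand-in for GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will change scientific computing because"
result = generator(prompt, max_length=60, num_return_sequences=1)

print(result[0]["generated_text"])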

To read the full blog post, click here.

Book on AI applications using our Ascend chip

26/8/2020

Elsevier released the English version of the book Ascend AI Processor Architecture and Programming: Principles and Applications of CANN.
The book gives an in-depth description of artificial intelligence applications using our Ascend chip and analyses the unique performance attributes of this processor. It also introduces theoretical aspects of artificial intelligence, describes the hardware and software architecture, and covers all related tools for programming the technology. The book concludes with detailed case studies on data and algorithms for artificial intelligence. Enjoy!

Please use the following link to buy the book: http://shrnk.cc/ea922

The king is dead, long live the king! ... or key takeaways from the 55th TOP500 list

22/6/2020

The new TOP500 list of the fastest supercomputers was announced. I expected many things, but I also have to admit that some things came as a surprise. Congratulations to Fujitsu, the entire Riken team and Japan for pushing an Arm-based system to the pole position of the new TOP500 list.

My key takeaways are:
  • The first supercomputer that can already be called an exascale system does not come from China or the USA. It is from Japan and uses the Arm architecture!
  • The Japanese supercomputer tops the new 55th TOP500 list of supercomputers with 415 PFLOPS, using an Arm-based processor.
  • The Japanese supercomputer is so powerful that it also scored number 1 on the HPCG and HPL-AI lists.
  • The application requirements of OpenFOAM, SPECFEM3D and WRF strongly influenced the design of the Japanese supercomputer.
  • The company "Preferred Networks" has developed a highly efficient matrix accelerator (MAU), which reaches 21.1 GFLOPS/Watt by using PCIe adapters combined with low-power Intel Xeon CPUs. This system takes the number 1 position on the Green500 list, just ahead of NVIDIA.
  • With 226 systems, China has by far the most entries on the new list, ahead of the USA with 114 and Japan with 29. France follows with 19, Germany with 16 and the Netherlands with 15.
  • In terms of performance, the USA continues to lead with 639 PFLOPS, followed by China with 566 PFLOPS and Japan with 528 PFLOPS.
  • Among manufacturers, Chinese vendors are dominating: Lenovo has 180 systems, Sugon has 68 systems and Inspur has 64 systems.
  • Overall, the total performance of the list has increased significantly, by 35 per cent, to 2.22 EFLOPS; the previous increase was only 5.5 per cent.
  • China continues to have a significant footprint in commercial systems on the new TOP500 list: it accounts for 78% of the commercial systems, which represent 52% of the performance of all commercial systems.

Da Vinci — A scalable architecture for neural network computing (updated v3)

10/5/2020

The updated version of this presentation provides additional information on the applicability of artificial intelligence in modern medicine, offers more insights into the end-to-end life cycle of AI implementations in projects and gives more details about our software stack.

Da Vinci — A scalable architecture for neural network computing

27/3/2020

In this presentation, I give an introduction to microprocessor trends, describe two distinct eras of computing usage in training AI systems and show the wide variety of computing architectures in computer science. I also describe our advanced computing and artificial intelligence product portfolio, which focuses on innovation, continuous dedication and backward compatibility. The central part of this talk offers insights into our Da Vinci architecture, with descriptions of all its building blocks, the core architecture and its micro-architectural configurations. Lastly, I show the process of how we execute artificial intelligence projects and the challenges that are still ahead of us.

Prediction of protein subcellular localization

3/3/2020

  • Uses deep learning tools to accurately identify the organelles in which proteins are located in human protein fluorescence micrographs
  • The trained model was executed on the Atlas 200 DK developer kit
  • The model analyses unlabelled protein fluorescence images and predicts the subcellular locations shown in the pictures
  • Protein subcellular localisation prediction targets the microscopic fluorescence images of proteins in cancer tissues and other tissues to identify the localisation of proteins and to find location markers related to cancer (a minimal classification sketch follows after this list)
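As a rough illustration of the classification step, here is a minimal, hypothetical Python sketch: a trained image classifier maps a fluorescence micrograph to one of several organelle classes. The model file, class list and preprocessing are placeholders for illustration, not the project's actual pipeline (which runs on the Atlas 200 DK).

# Minimal organelle-classification sketch. "organelle_model.pt" and the
# class list below are hypothetical placeholders for the trained model.
import torch
from PIL import Image
from torchvision import transforms

ORGANELLES = ["nucleus", "mitochondria", "endoplasmic reticulum", "cytosol"]  # example classes

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = torch.jit.load("organelle_model.pt")  # hypothetical trained classifier
model.eval()

image = Image.open("fluorescence_micrograph.png").convert("RGB")
batch = preprocess(image).unsqueeze(0)        # shape: (1, 3, 224, 224)

with torch.no_grad():
    probabilities = torch.softmax(model(batch), dim=1)[0]

predicted = ORGANELLES[int(probabilities.argmax())]
print(f"Predicted organelle: {predicted} ({probabilities.max():.1%})")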

Retinal blood vessel segmentation in the eyeground

3/3/2020

  • The fundus retinal blood vessel segmentation application was developed for the Atlas 200 DK inference system in partnership with Nankai University, led by Professor Li Tao of the Intelligent Computing System Research Office.
  • This project makes full use of the neural network computing power of the Atlas 200 DK system to segment the fundus vessels in real time.
  • The total inference time for 20 pictures is 761.8 milliseconds, and the average inference time per image is about 38 milliseconds (see the timing sketch after this list).
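To show how such per-image inference times are typically measured, here is a minimal, hypothetical Python sketch that times a segmentation model over a batch of fundus images. The model file and image paths are placeholders; the real project uses the Atlas 200 DK toolchain rather than PyTorch.

# Minimal timing sketch for segmentation inference: run the model over a
# set of images and report total and average latency, as quoted above.
# "vessel_segmentation.pt" and the image folder are hypothetical placeholders.
import time
import glob
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
])

model = torch.jit.load("vessel_segmentation.pt")  # hypothetical trained model
model.eval()

images = sorted(glob.glob("fundus_images/*.png"))[:20]

start = time.perf_counter()
with torch.no_grad():
    for path in images:
        batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        mask = model(batch)                 # per-pixel vessel probabilities
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"Total inference time for {len(images)} images: {elapsed_ms:.1f} ms")
print(f"Average per image: {elapsed_ms / len(images):.1f} ms")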