#Topological ordering algorithm
Explore tagged Tumblr posts
Text
Trapped-Ion Quantum Computing Solved Protein Folding Issues

Trapped-ion quantum computing
Quantum Computing Solves Complex Protein Folding and Optimisation Problems
Quantum computing advanced when researchers developed a new quantum algorithm on trapped-ion computers to solve combinatorial optimisation and protein folding problems. This work is the largest quantum hardware implementation of protein folding and shows how quantum systems can outperform traditional computers on difficult problems.
The work by Kipu Quantum GmbH and IonQ Inc. used a 36-qubit trapped-ion processor to mimic protein folding for peptides up to 12 amino acids. Computational biology still struggles to reliably predict protein structures, which affects materials research and medication development. For this complex situation, classical approaches are limited.
The researchers applied non-variational bias-field digitised counterdiabatic quantum optimisation (BF-DCQO). This method uses the intrinsic all-to-all connectivity of trapped-ion systems to explore the solution space of difficult higher-order unconstrained binary optimisation (HUBO) problems.
HUBO problems are demanding optimisation challenges whose cost functions contain interactions among three or more binary variables at once. The BF-DCQO method solves such problems effectively on fully connected trapped-ion quantum processors, and in these experiments it performed best on dense HUBO instances.
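The structure of a HUBO cost function is easy to sketch. The toy below is an illustrative instance (not one of the problems from the paper): it evaluates a cost function with a third-order term and brute-forces the minimum, which is exactly the exponential search that quantum optimisers aim to shortcut.

```python
from itertools import product

# A toy HUBO instance: terms map tuples of binary-variable indices to coefficients.
# Third-order terms like (0, 1, 2) are what make this "higher-order" rather than QUBO.
terms = {
    (0,): -1.0,
    (1, 2): 2.0,
    (0, 1, 2): -3.0,
}

def hubo_energy(bits, terms):
    """Evaluate the HUBO cost: sum of coefficient * product of the involved bits."""
    total = 0.0
    for idx, coeff in terms.items():
        prod = 1
        for i in idx:
            prod *= bits[i]
        total += coeff * prod
    return total

# Brute-force search over all assignments -- only feasible for tiny instances;
# this exponential blow-up is what quantum optimisers aim to sidestep.
best = min(product([0, 1], repeat=3), key=lambda b: hubo_energy(b, terms))
print(best, hubo_energy(best, terms))  # (1, 1, 1) has energy -1 + 2 - 3 = -2
```

Because the cost is a polynomial of degree three or higher in the binary variables, a HUBO instance cannot be mapped to a quadratic (QUBO) form without auxiliary variables, which is part of what makes dense instances hard for classical solvers.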
Protein folding and fully connected spin-glass and MAX 4-SAT problems were used to demonstrate the algorithm's flexibility on all 36 qubits. Notably, the team solved MAX 4-SAT instances at the computational phase transition, a regime that is notoriously hard for classical algorithms. Handling this phase transition shows the algorithm's capacity to address problems at the boundary of classical computation, and suggests that quantum algorithms may outperform conventional methods for certain tasks.
The non-variational character and solution-space navigation of the BF-DCQO algorithm are notable. BF-DCQO may behave more deterministically than probabilistic variational quantum methods for specific problem classes. This direct approach reduces repeated measurements and post-processing, improving computational efficiency.
Modelling protein folding for systems of 12 amino acids is a major advance, exceeding earlier quantum hardware implementations in scale. The quantum technique improves these computationally intensive simulations, allowing researchers to study protein structures in unprecedented depth.
Researchers meticulously constructed and polished the BF-DCQO algorithm to maximise trapped-ion quantum processor power. All-to-all connectivity allows complex quantum circuits to avoid topologies with sparser connections. The algorithm's error characteristics demonstrated that it is robust to numerous types of faults, making it suitable for noisy quantum devices. Additionally, strategies were developed to mitigate the main error causes.
This research suggests that the BF-DCQO algorithm can provide a quantum advantage for dense HUBO challenges, especially when combined with scalable trapped-ion quantum devices. The demonstration that quantum computing can outperform classical algorithms on problems that conventional computers cannot solve marked a turning point in its progress. The algorithm's adaptability to a variety of optimisation problems shows its promise to solve real-world challenges in drug development and other fields like financial modelling.
Scalability was considered to make the method compatible with larger quantum processors without major adjustments. The team is developing ways to spread the algorithm over multiple quantum processors to boost its scalability. The BF-DCQO algorithm's implementation has been meticulously documented to aid future research and instruct other researchers who want to replicate the findings.
The researchers believe quantum technology and algorithm design will enable future answers to much larger and more difficult problems. This constant evolution should enable new technical and scientific advances.
Quantum Zeitgeist, an online journal covering quantum computing news, research, and breakthroughs, covered this discovery. The publication helps researchers and businesses understand and use quantum computing to solve otherwise intractable problems in various industries. This work supports the publication's mission of covering how quantum technologies are changing the future.
#Trappedionquantumcomputing#IonQ#quantumalgorithms#BFDCQOalgorithm#quantumprocessors#quantumhardware#News#Technews#Technology#Technologynews#Technologytrends#Govindhtech
0 notes
Text
IEEE Transactions on Evolutionary Computation, Volume 29, Issue 3
1) Guest Editorial Machine-Learning-Assisted Evolutionary Computation
Author(s): Rong Qu, Nelishia Pillay, Emma Hart, Manuel López-Ibáñez
Pages: 571 - 573
2) A Deep Reinforcement Learning-Assisted Multimodal Multiobjective Bilevel Optimization Method for Multirobot Task Allocation
Author(s): Yuanyuan Yu, Qirong Tang, Qingchao Jiang, Qinqin Fan
Pages: 574 - 588
3) An Iterated Greedy Algorithm With Reinforcement Learning for Distributed Hybrid Flowshop Problems With Job Merging
Author(s): Xin-Rui Tao, Quan-Ke Pan, Liang Gao
Pages: 589 - 600
4) Surrogate-Assisted Multiobjective Gene Selection for Cell Classification From Large-Scale Single-Cell RNA Sequencing Data
Author(s): Jianqing Lin, Cheng He, Hanjing Jiang, Yabing Huang, Yaochu Jin
Pages: 601 - 615
5) Dealing With Structure Constraints in Evolutionary Pareto Set Learning
Author(s): Xi Lin, Xiaoyuan Zhang, Zhiyuan Yang, Qingfu Zhang
Pages: 616 - 630
6) A Two-Population Algorithm for Large-Scale Multiobjective Optimization Based on Fitness-Aware Operator and Adaptive Environmental Selection
Author(s): Bingdong Li, Yan Zhang, Peng Yang, Xin Yao, Aimin Zhou
Pages: 631 - 645
7) Protein Structure Prediction Using a New Optimization-Based Evolutionary and Explainable Artificial Intelligence Approach
Author(s): Jun Hong, Zhi-Hui Zhan, Langchong He, Zongben Xu, Jun Zhang
Pages: 646 - 660
8) Multiobjective Mixed-Integer Quadratic Models: A Study on Mathematical Programming and Evolutionary Computation
Author(s): Ofer M. Shir, Michael Emmerich
Pages: 661 - 675
9) A Survey on Evolutionary Computation-Based Drug Discovery
Author(s): Qiyuan Yu, Qiuzhen Lin, Junkai Ji, Wei Zhou, Shan He, Zexuan Zhu, Kay Chen Tan
Pages: 676 - 696
10) Linear Subspace Surrogate Modeling for Large-Scale Expensive Single/Multiobjective Optimization
Author(s): Langchun Si, Xingyi Zhang, Ye Tian, Shangshang Yang, Limiao Zhang, Yaochu Jin
Pages: 697 - 710
11) A Classifier-Ensemble-Based Surrogate-Assisted Evolutionary Algorithm for Distributed Data-Driven Optimization
Author(s): Xiao-Qi Guo, Feng-Feng Wei, Jun Zhang, Wei-Neng Chen
Pages: 711 - 725
12) Improving the Efficiency of the Distance-Based Hypervolume Estimation Using ND-Tree
Author(s): Andrzej Jaszkiewicz, Piotr Zielniewicz
Pages: 726 - 733
13) A Cooperative Ant Colony System for Multiobjective Multirobot Task Allocation With Precedence Constraints
Author(s): Tong Qian, Xiao-Fang Liu, Yongchun Fang
Pages: 734 - 748
14) Evolutionary Trainer-Based Deep Q-Network for Dynamic Flexible Job-Shop Scheduling
Author(s): Yun Liu, Fangfang Zhang, Yanan Sun, Mengjie Zhang
Pages: 749 - 763
15) MOEA/D With Spatial–Temporal Topological Tensor Prediction for Evolutionary Dynamic Multiobjective Optimization
Author(s): Xianpeng Wang, Yumeng Zhao, Lixin Tang, Xin Yao
Pages: 764 - 778
16) A Surrogate-Assisted Evolutionary Framework for Expensive Multitask Optimization Problems
Author(s): Shenglian Tan, Yong Wang, Guangyong Sun, Tong Pang, Ke Tang
Pages: 779 - 793
17) Improved Evolutionary Multitasking Optimization Algorithm With Similarity Evaluation of Search Behavior
Author(s): Xiaolong Wu, Wei Wang, Tengfei Zhang, Honggui Han, Junfei Qiao
Pages: 794 - 808
18) Competitive Multitasking for Computational Resource Allocation in Evolutionary-Constrained Multiobjective Optimization
Author(s): Xiaoliang Chu, Fei Ming, Wenyin Gong
Pages: 809 - 821
19) Fractional Order Differential Evolution
Author(s): Kaiyu Wang, Shangce Gao, MengChu Zhou, Zhi-Hui Zhan, Jiujun Cheng
Pages: 822 - 835
20) An Interval Multiobjective Evolutionary Generation Algorithm for Product Design Change Plans in Uncertain Environments
Author(s): Rui-Zhao Zheng, Yong Zhang, Xiao-Yan Sun, Dun-Wei Gong, Xiao-Zhi Gao
Pages: 836 - 850
0 notes
Text
Study proposes a new theoretical framework for understanding complex higher-order networks
Filippo Radicchi, professor of Informatics at the Luddy School of Informatics, Computing, and Engineering, co-authored a ground-breaking study that could lead to the development of new AI algorithms and new ways to study brain function. The study, titled “Topology shapes dynamics of higher-order networks,” and published in Nature Physics, proposed a theoretical framework specifically designed…
0 notes
Text
Lithion's Groundbreaking BMS: A Boost for 51V Battery Assemblers in L3 Electric Vehicles
With technological breakthroughs opening the door to effective and environmentally friendly transportation options, the electric vehicle (EV) market has been undergoing a revolution. Battery Management Systems (BMS) are the unsung heroes of these developments, guaranteeing the longevity, safety, and effectiveness of battery packs. Lithion has introduced a bespoke BMS made to meet the specific requirements of 51V battery pack assemblers for Level 3 (L3) electric vehicles.
Understanding the Role of a BMS in EVs
Any EV must have a battery management system, which serves as the battery pack's brain. It guarantees ideal charging and discharging, keeps an eye on and regulates the performance of individual cells, and protects against possible risks like short circuits, overcharging, and overheating. A strong and specific BMS is essential for L3 electric vehicles, which have higher power and reliability requirements.
Why 51V Batteries for L3 EVs?
Because 51V battery systems balance energy density, safety, and performance, they have become more and more popular in the L3 EV market. These batteries preserve efficiency and controllable costs while supplying the power needed for medium- to heavy-duty vehicles, including delivery vans, e-buses, and ride-hailing cars. However, putting together these battery packs presents a unique set of difficulties:
Precision Balancing: Ensuring uniform performance across all cells.
Thermal Management: Preventing overheating in high-demand scenarios.
Safety Measures: Guarding against electrical hazards.
Scalability: Supporting modular designs for different vehicle types.
Lithion’s Tailored Solution
Lithion's new BMS was created specifically to satisfy the needs of 51V battery assemblers in the L3 EV market. Here is what makes it unique:
1. Advanced Cell Balancing
Modern cell balancing algorithms in Lithion's BMS guarantee that every cell in the 51V battery pack operates consistently. This guarantees steady vehicle performance in addition to extending the battery's lifespan.
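The idea behind cell balancing can be illustrated in a few lines. The sketch below is a generic passive-balancing scheme, not Lithion's proprietary algorithm; the 10 mV window and the example voltages are invented for illustration. Any cell sitting too far above the pack minimum has its bleed resistor enabled until the pack converges.

```python
# Illustrative passive-balancing sketch (not Lithion's proprietary algorithm):
# cells whose voltage exceeds the pack minimum by more than a threshold are
# bled through a resistor until the pack converges.
BALANCE_THRESHOLD_V = 0.010  # hypothetical 10 mV balancing window

def cells_to_bleed(cell_voltages, threshold=BALANCE_THRESHOLD_V):
    """Return indices of cells that should have their bleed resistor enabled."""
    v_min = min(cell_voltages)
    return [i for i, v in enumerate(cell_voltages) if v - v_min > threshold]

pack = [3.312, 3.298, 3.309, 3.297]  # example per-cell voltages in one pack segment
print(cells_to_bleed(pack))  # cells 0 and 2 sit above the balancing window
```

A production BMS would run this loop continuously in firmware, with hysteresis and temperature limits layered on top; the point here is only the comparison against the weakest cell.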
2. Enhanced Safety Protocols
The system has several layers of safety features, including overcharge avoidance, short-circuit protection, and real-time heat monitoring. This is especially important for L3 vehicles that work in harsh conditions.
3. Scalability and Modularity
Because Lithion's BMS is made to work with modular battery pack topologies, assemblers may quickly modify the system to fit different car types and setups.
4. IoT Integration for Smart Monitoring
Remote monitoring and diagnostics are made possible by the BMS's smooth integration with IoT systems. Fleet managers can monitor battery status, anticipate maintenance requirements, and maximize operational effectiveness with this functionality.
5. Compliance with Industry Standards
Lithion ensures that its BMS meets international safety and performance standards, making it easier for assemblers to market their systems worldwide.
Benefits for Battery Pack Assemblers
By adopting Lithion's specialized BMS, battery pack assemblers can achieve:
Reduced Development Time: Pre-designed features tailored for 51V systems simplify integration.
Lower Costs: Improved efficiency and reduced waste during assembly.
Increased Reliability: Enhanced safety and performance boost end-user confidence.
Scalable Production: Modular design supports diverse application needs.
Driving the Future of L3 EVs
The L3 electric car market is expected to rise significantly as the need for environmentally friendly transportation increases. This expansion is accelerated by innovations such as Lithion's customized BMS, which equips battery pack assemblers with the necessary tools for success.
Lithion's innovative BMS, which prioritizes safety, effectiveness, and adaptability, not only tackles present industry issues but also establishes a standard for future developments. This is a positive step for EV manufacturers and battery pack assemblers alike in building a sustainable and electrified future.
For more information, see Lithion Power BMS.
#lithium battery#bms#battery management system#lithion#lithion power#batterymanagementsystem#electricvehicle#ev#lithionpower
0 notes
Text
Solved: Lab 12: Topological Sort
In this lab, we will implement an algorithm for topological sorting. Given a graph structure (i.e. a set of nodes and edges), your program prints a list of nodes as the result of a topological sort. As we have discussed in class, topological sorting needs a queue ADT to hold the nodes that have in-degree zero during the sorting process. 1. Input and Output Read a set of vertices…
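The queue-based approach the lab describes is Kahn's algorithm. A minimal Python sketch (the lab's exact input format may differ) looks like this:

```python
from collections import deque

def topological_sort(vertices, edges):
    """Kahn's algorithm: repeatedly dequeue a vertex with in-degree zero."""
    indegree = {v: 0 for v in vertices}
    adj = {v: [] for v in vertices}
    for u, w in edges:
        adj[u].append(w)
        indegree[w] += 1
    # The queue holds vertices with no remaining incoming edges, as the lab requires.
    queue = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for w in adj[u]:
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    if len(order) != len(vertices):
        raise ValueError("graph contains a cycle; no topological order exists")
    return order

print(topological_sort("ABCD", [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]))
# ['A', 'B', 'C', 'D']
```

Each vertex is enqueued exactly once and each edge examined once, so the algorithm runs in O(V + E) time; the cycle check falls out for free, since a cycle leaves some vertices with positive in-degree forever.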
0 notes
Text
i'm conducting an experiment on how to study the theory effectively
there are i guess two main ways:
(1) read and take notes simultaneously
(2) read first, then take notes
so for the first one, there is the risk of going passive with the note-taking, writing down the symbols without focusing on their meaning. for the second one there is the risk of zoning out and just reading the symbols, again, losing their meaning
the problem seems to be that the processing of sheer symbols and processing their meanings might be disjoint and their natural tendency seems to be so
from my recent actions i noticed that (1) doesn't work for me as effectively as (2)
it might be that when i don't plan to write something down right away, i am more inclined to remember these things short-term as "i won't be able to check it later so remember it now in order to understand what comes next", and when i'm taking notes simultaneously it's "i have it written down anyway so i can take a peek anytime"
so now i'm testing the strategy of
read → try to understand the idea and memorize the elements → why all the elements are important → understand the construction in more detail and write it down
this is how i imagine my mind working:

it means that at first i start to remember the elements as points of its own but simultaneously my brain builds its idea on how they interact and then i notice the inner structure of how the elements are connected with each other in less obvious ways
this idea is cool to visualize how i imagine my thinking, because it shows how learning the topic reduces possible permutations and paths. i have this problem that when i start learning something new i see so many possibilities of what can happen to the elements that i can't discern between crucial and additional stuff. in order to use the knowledge i need to provide some structure

thus the main goal of optimized learning is to take the leap from "i memorized the elements" to "i understand their structure" as fast as possible
and so the strategy (2) might be more effective as it forces the memorization of the elements first and then it is easier to provide structure for them, where i would be defining order on something that's already in my mind. whereas (1) strikes at memorization and structuring simultaneously, it is too difficult for me to see at first in which direction the topic is going, i must know the next point
in a few days i will focus on how "the point" can be defined in this and how to characterize the connections
honestly tho this is some sorta pseudo graph theory and pseudo topology and i don't believe this could be as straightforward. otherwise nobody would ever post any study tips and we would have a field of study called "learning optimization", this would be too big to go unnoticed. i wish it was so easy to just know how brain works and be able to build such an algorithm that would optimize the desired processes lmao
i wish i was a σ-field or something
side note is, i love this kind of thinking and i love to analyze how the thinking works, especially when it can be algorithmized or structured in some ways. the moment i see something is structured or algorithmic it becomes interesting to me
3 notes
Text
The control laws, cockpit/bridge visual display, reference/guide stars, and star maps for navigation and guidance in interstellar space have not been developed for either type of FTL space warp. Navigation and guidance systems will have to be developed separately for warp drives and traversable wormholes because of their different implementations. D. G. Hoag and W. Wrigley [69] studied navigation and guidance for relativistic interstellar missions, and G. Vulpetti [70] studied relativistic navigation using the 3-dimensional rocket equation. These studies could provide a starting point to begin an equivalent study for FTL interstellar missions. Of further interest is how the forward and aft starfields appear to starship crews
Of further interest is how the forward and aft starfields appear to starship crews who visually monitor their flight progress using electronic visual displays and/or windows during FTL flight or while traversing a wormhole. L. H. Ford and T. A. Roman [12] and C. Clark, W. A. Hiscock and S. L. Larson [71] show that for a warp drive starship at FTL speed, the angular deflection and redshift of photons propagating through the distortion of the warp bubble is such that stars in the forward and reverse hemispheres will appear closer to the direction of motion than they would to an observer at rest. The stars in the forward direction will be strongly blueshifted and in the aft direction they will be strongly redshifted. The light from stars directly overhead, underneath or to the sides remains unaffected by the aberration. This aberration is qualitatively similar to that caused by SR for the case of relativistic rockets [72-74]. This suggests that visual guide/reference stars and typical star maps will be useless for warp drive starship navigation. Real-time electronic visual displays will be required to display accurate virtual starfields and maps, and they must have computer algorithms that perform real-time adjustments to account for the effects of FTL aberration in order to display visually meaningful views and maps.
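The special-relativistic aberration being compared to has a closed form. For an observer moving at speed $\beta = v/c$, a star at rest-frame angle $\theta$ from the direction of motion appears at angle $\theta'$, with the received frequency $\nu'$ shifted accordingly (for the warp bubble itself the deflection follows from the bubble metric, so this is only the qualitative analogue):

```latex
\cos\theta' = \frac{\cos\theta + \beta}{1 + \beta\cos\theta},
\qquad
\nu' = \frac{\nu\sqrt{1-\beta^2}}{1 - \beta\cos\theta'}
```

As $\beta \to 1$, $\theta' \to 0$ for almost every $\theta$: the starfield crowds toward the direction of motion and is blueshifted ahead and redshifted astern, which is the forward-shifted view described above.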
The view through a traversable wormhole is even worse. The negative energy density threading a wormhole throat produces repulsive gravity, which deflects light rays going through and around the throat [75]. The entrance to the (spherically symmetric) wormhole would look like a sphere that contained the mirror image of a whole other universe or remote region within our universe, incredibly shrunken and distorted. This is a topological inversion of images manifested by a spherically symmetric wormhole geometry. The spherical wormhole entrance/exit (a.k.a. the throat) is called a hypersphere because it is the hyperspace surface of (3+1)-dimensional spacetime. If one were to travel through the wormhole and look back at it from the other side, then they would see a sphere (the entry way back home) that seemed to contain their whole original universe or their home region of space near Earth. This would look just like a glass Christmas tree ornament, which is just a spherical mirror that reflects, in principle, the entire universe around it. A flat-face traversable wormhole would not distort the image of the remote space region or other universe seen through it because the negative energy density at the throat is zero as seen and felt by light and matter passing through it.
Eric Davis is already building the design for the starship bridge for his theoretical warp drives in his head, but there's nothing I can really interpret about the logistics of this thing. How much harder is it to move large things? What difference is there between heading to Alpha Centauri and heading to Andromeda? I need a materialist analysis of faster-than-light travel here.
4 notes
Text
Probable Root Cause: Improving Instana’s Observability

We are happy to report that Instana now includes a Probable Root Cause capability, available in public preview as of version 277. This feature provides deeper insights, making it possible to identify the cause of a system failure quickly, with little to no investigation time.
The expensive nature of business application interruptions has been repeatedly demonstrated. As organisations actively embrace digitisation, the estimated cost of an average outage can reach USD 50,000 to USD 500,000 per hour, or higher. Because applications are becoming more complicated, it can take Site Reliability Engineers (SREs) hours, sometimes even days, to find and fix issues.
IBM has included the Probable Root Cause capability to Intelligent Incident Remediation from Instana in order to help with this issue. When an incident is created, Instana automatically uses Causal AI to analyse call statistics, topology, and surrounding data in order to rapidly and effectively determine the most likely cause of the application failure. This saves SREs numerous hours of labour and prevents significant costs for the company by enabling them to address issues by focussing on the cause of the issue rather than just its symptoms.
Probable Root Cause
The public preview of Probable Root Cause is currently accessible. When a Probable Root Cause is found for an incident, the Incidents page for Smart Alerts on the following entity types now includes a Probable Root Cause section:
Application perspectives
Services
Endpoints
Service Level Objectives on application perspectives
The Probable Root Cause entity identified in this section can be any entity monitored by Instana, such as a process, endpoint, or service. The section also includes the following details:
A likelihood level for the outcome, along with statistics about trace data surrounding the entity that justify the choice of a specific Probable Root Cause
Events, such as Change events, Issues, or Incidents, that happened on the identified Probable Root Cause entity
Go to an application’s, service’s, or endpoint’s Incidents page to view the Probable Root Causes that have been found by Smart Alerts.
In collaboration with IBM Research, IBM developed an algorithm that, once an incident has been triggered, uses differential observability and causal AI to analyse data modalities like traces and topology to detect unhealthy entities. Any part of a system that is monitored with Instana’s support for more than 300 technologies is referred to as an entity. Through the examination of diverse data modalities spanning your infrastructure, apps, and services, IBM is capable of pinpointing the probable reasons for application outages and directing you towards dashboards that will facilitate your inquiry more quickly.
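The intuition behind this kind of analysis can be sketched in a few lines. The toy below is illustrative only, not IBM's causal-AI algorithm; the entity names and the 5% error threshold are invented. The idea is that failures propagate upstream through the call graph, so a plausible root cause is an unhealthy entity all of whose downstream dependencies are healthy.

```python
# Illustrative sketch only -- not IBM's actual causal-AI algorithm. Given a
# service call graph and per-entity error rates, flag unhealthy entities and
# pick as the probable root cause one whose downstream dependencies are all
# healthy (failures tend to propagate upstream from the true cause).

ERROR_THRESHOLD = 0.05  # hypothetical: >5% erroneous calls marks an entity unhealthy

def probable_root_cause(calls, error_rates, threshold=ERROR_THRESHOLD):
    """calls: dict mapping each entity to the downstream entities it calls."""
    unhealthy = {e for e, r in error_rates.items() if r > threshold}
    for entity in unhealthy:
        if not any(dep in unhealthy for dep in calls.get(entity, [])):
            return entity  # unhealthy, but everything it depends on is healthy
    return None

calls = {"frontend": ["orders"], "orders": ["payments-db"], "payments-db": []}
rates = {"frontend": 0.12, "orders": 0.30, "payments-db": 0.40}
print(probable_root_cause(calls, rates))  # payments-db
```

Real causal analysis over traces and topology is far more involved (confidence scoring, change events, multiple modalities), but this captures why pointing at the deepest unhealthy dependency saves SREs from chasing upstream symptoms.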
IBM also enhances this data by presenting recent occurrences related to the identified probable root cause entity, which may indicate why that entity failed, and provides a transparent explanation for the AI's identification of an entity as the Probable Root Cause. Additionally, Probable Root Cause points you directly to pertinent data, traces, and logs to expedite further problem diagnostics.
At the moment, probable root cause analysis runs automatically for all incidents triggered by smart alerts on the entity types listed above.
Principal advantages
Instant intelligence: Probable root cause works nearly instantly right out of the box, in contrast to conventional methods that need a lot of setup and training. You may immediately begin to reap the benefits of improved observability, regardless of whether you’re employing self-hosted deployment or software as a service.
Comprehensive insights: With Instana’s extensive data coverage, you can see your entire stack with never-before-seen clarity. Probable Root Cause takes into account every element of your infrastructure, from frontend to backend, microservices to databases, in order to provide precise diagnoses.
Explainable results: Transparency is at the heart of Instana’s strategy. IBM gives your teams trustworthy, actionable insights with transparent access to the data sources and methodology used to identify likely root causes.
Safe data protection: Probable Root Cause ensures the confidentiality and security of your important data by providing insights without allowing the data to ever leave Instana.
Probable root cause in action
This is a preview of how probable root cause analysis might help you locate issues more quickly in the Instana incident dashboard. (Image credit: IBM)
In this example, a sudden spike in the quantity of incorrect calls triggers an application smart alert.
From the smart alert, Instana automatically determines the root cause entity (an endpoint in this case), offers further explanation for the error, and records any related events that transpire on that entity. This makes it possible for the user to identify the incident’s root cause and prioritise fixing it.
Start now
IBM cordially encourages you to explore Probable Root Cause in your own environment. This feature promises to take your debugging to the next level and offer a smooth experience, whether you are an experienced Instana user or are investigating observability options for the first time.
See IBM’s release notes and documentation for comprehensive guidance and instructions on how to make the most use of this feature, as well as for further information about probable root cause.
At Instana, IBM’s dedication to providing state-of-the-art observability solutions that demystify complexity and enable teams to create and manage robust systems never wavers. As IBM continues to innovate in observability and application performance monitoring (APM), keep checking back for more updates.
Read more on govindhtech.com
#ProbableRootCause#ImprovingInstana#Observability#documentation#IBMhasincluded#Instana#improvedobservability#ibm#releasenotes#observabilitysolutions#technology#technews#news#govindhtech
0 notes
Text
Soft Computing, Volume 29, Issue 4, February 2025
1) A map-reduce algorithm to find strongly connected components of directed graphs
Author(s): Fujun Ji, Jidong Jin
Pages: 1947 - 1966
2) Complex preference analysis: a score-based evaluation strategy for ranking and comparison of the evolutionary algorithms
Author(s): Debojyoti Sarkar, Anupam Biswas
Pages: 1967 - 1980
3) Weighted rank aggregation based on ranker accuracies for feature selection
Author(s): Majid Abdolrazzagh-Nezhad, Mahdi Kherad
Pages: 1981 - 2001
4) Exploring diversity and time-aware recommendations: an LSTM-DNN model with novel bidirectional dynamic time warping algorithm
Author(s): Te Li, Liqiong Chen, Kaiwen Zhi
Pages: 2003 - 2013
5) Cyber-attack detection based on a deep chaotic invasive weed kernel optimized machine learning classifier in cloud computing
Author(s): M. Indrasena Reddy, A. P. Siva Kumar, K. Subba Reddy
Pages: 2015 - 2030
6) A novel instance density-based hybrid resampling for imbalanced classification problems
Author(s): You-Jin Park, Chung-Kang Ma
Pages: 2031 - 2045
7) A bi-objective multi-warehouse multi-period order picking system under uncertainty: a benders decomposition approach
Author(s): Fatemeh Nikkhoo, Ali Husseinzadeh Kashan, Bakhtiar Ostadi
Pages: 2047 - 2074
8) A two-population artificial tree algorithm based on adaptive updating strategy for dominant populations
Author(s): Yaping Xiao, Linfeng Niu, Qiqi Li
Pages: 2075 - 2106
9) Multi-ant colony algorithm based on the Stackelberg game and incremental learning
Author(s): Qihuan Wu, Xiaoming You, Sheng Liu
Pages: 2107 - 2128
10) Review of quantum algorithms for medicine, finance and logistics
Author(s): Alessia Ciacco, Francesca Guerriero, Giusy Macrina
Pages: 2129 - 2170
11) A novel attention based deep learning model for software defect prediction with bidirectional word embedding system
Author(s): M. Chitra Devi, T. Dhiliphan Rajkumar
Pages: 2171 - 2188
12) Modeling and analysis of data corruption attacks and energy consumption effects on edge servers using concurrent stochastic games
Author(s): Abdelhakim Baouya, Brahim Hamid, Saddek Bensalem
Pages: 2189 - 2214
13) Enhanced TODIM-TOPSIS framework for design quality evaluation for college smart sports venues under hesitant fuzzy sets
Author(s): Feng Yang, Yuefang Wu, Yi Li
Pages: 2215 - 2227
14) New Vigenere method with pseudo-random affine functions for color image encryption
Author(s): Hamid El Bourakkadi, Abdelhakim Chemlal, Abdelhamid Benazzi
Pages: 2229 - 2245
15) Adopting fuzzy multi-criteria decision-making ranking approach ensuring connected topology in industrial wireless sensor networks
Author(s): Anvita Nandan, Itu Snigdh
Pages: 2247 - 2261
16) Leveraging feature fusion ensemble of VGG16 and ResNet-50 for automated potato leaf abnormality detection in precision agriculture
Author(s): Amit Kumar Trivedi, Tripti Mahajan, Shailendra Tiwari
Pages: 2263 - 2277
17) Deteriorating inventory model with advance-cash-credit payment schemes and partial backlogging
Author(s): Chun-Tao Chang, Mei-Chuan Cheng, Liang-Yuh Ouyang
Pages: 2279 - 2295
18) Reliability analysis of discrete-time multi-state star configuration power grid systems with performance sharing
Author(s): Peng Su, Keyong Zhang, Honghua Shi
Pages: 2297 - 2310
19) Secure transmission of medical image using a wavelet interval type-2 TSK fuzzy brain-imitated neural network
Author(s): Duc-Hung Pham, Tuan-Tu Huynh, Van-Phong Vu
Pages: 2311 - 2329
20) Enhanced single shot detector for small object detection in drone-capture scenarios
Author(s): Yanxia Shi, Yanrong Liu, Yaru Liu
Pages: 2331 - 2341
21) A deep learning-based model for automated STN localization using local field potentials in Parkinson’s disease
Author(s): Mohamed Hosny, Mohamed A. Naeem, Yili Fu
Pages: 2343 - 2362
22) A lightweight CNN model for UAV-based image classification
Author(s): Xinjie Deng, Michael Shi, Chee Peng Lim
Pages: 2363 - 2378
23) Gender opposition recognition method fusing emojis and multi-features in Chinese speech
Author(s): Shunxiang Zhang, Zichen Ma, Kuan-Ching Li
Pages: 2379 - 2390
24) RETRACTED ARTICLE: Near-infrared and visible light face recognition: a comprehensive survey
Author(s): Fangzheng Huang, Xikai Tang, Dayan Ban
Pages: 2391 - 2391
25) Retraction Note: Classification of noiseless corneal image using capsule networks
Author(s): H. James Deva Koresh, Shanty Chacko
Pages: 2393 - 2393
26) Retraction Note: Enhancing performance of cell formation problem using hybrid efficient swarm optimization
Author(s): G. Nagaraj, Manimaran Arunachalam, S. Paramasamy
Pages: 2395 - 2395
27) Retraction Note: IADF security: insider attack detection using fuzzy logic in wireless multimedia sensor networks
Author(s): Ashwinth Janarthanan, Dhananjay Kumar, C. B. Divya Parvathe
Pages: 2397 - 2397
1 note
Text
Iris Publishers - Global Journal of Engineering Sciences (GJES)
Artificial Neural Networks and Hopfield Type Modeling
Authored by Haydar Akca

From the mathematical point of view, an artificial neural network corresponds to a nonlinear transformation of some inputs into certain outputs. Many types of neural networks have been proposed and studied in the literature, and the Hopfield-type network has become an important one due to its potential for applications in various fields of daily life.
A neural network is a network that performs computational tasks such as associative memory, pattern recognition, optimization, model identification, signal processing, etc. on a given pattern via interaction between a number of interconnected units characterized by simple functions. From the mathematical point of view, an artificial neural network corresponds to a nonlinear transformation of some inputs into certain outputs. There are a number of terminologies commonly used for describing neural networks. Neural networks can be characterized by an architecture or topology, node characteristics, and a learning mechanism [1]. The interconnection topology consists of a set of processing elements arranged in a particular fashion. The processing elements are connected by links and have weights associated with them. Each processing element is associated with:
• A state of activation (state variable)
• An output function (transfer function)
• A propagation rule for transfer of activation between processing elements
• An activation rule, which determines the new state of activation of a processing element from its inputs, the weights associated with those inputs, and the current activation.
Neural networks may also be classified based on the type of input, which is either binary or continuous valued, or whether the networks are trained with or without supervision. There are many different types of network structures, but the main types are feed-forward networks and recurrent networks. Feed-forward networks have unidirectional links, usually from input layers to output layers, and there are no cycles or feedback connections. In recurrent networks, links can form arbitrary topologies and there may be arbitrary feedback connections. Recurrent neural networks have been very successful in time series prediction. Hopfield networks are a special case of recurrent networks. These networks have feedback connections, have no hidden layers, and the weight matrix is symmetric.
Neural networks are analytic techniques capable of predicting new observations from other observations after executing a process of so-called learning from existing data. Neural network techniques can also be used as a component of analysis designed to build explanatory models. Now there is neural network software that uses sophisticated algorithms directly contributing to the model building process.
In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts [2] wrote a paper on how neurons might work. In order to describe how neurons in the brain might work, they modeled a simple neural network using electrical circuits. As computers became more advanced in the 1950s, it became possible to simulate a hypothetical neural network. In 1982, John Hopfield presented a paper [3]. His approach was to create more useful machines by using bidirectional lines. The model proposed by Hopfield, also known as Hopfield's graded response neural network, is based on an analogue circuit consisting of capacitors, resistors and amplifiers. Previously, the connections between neurons were only one-way. Around the same time, scientists introduced a "hybrid network" with multiple layers, each layer using a different problem-solving strategy.
Now, neural networks are used in several applications. The fundamental idea behind the nature of neural networks is that if it works in nature, it must be able to work in computers. The future of neural networks, though, lies in the development of hardware. Research that concentrates on developing neural network hardware is relatively slow. Due to the limitations of processors, neural networks can take weeks to learn. Researchers are now trying to create what is called a "silicon compiler" or "organic compiler" to generate a specific type of integrated circuit that is optimized for the application of neural networks. Digital, analog, and optical chips are the different types of chips being developed.
The brain manages to perform extremely complex tasks. The brain is principally composed of about 10 billion neurons, each connected to about 10,000 other neurons. Each neuronal cell body (soma) connects with the input and output channels (dendrites and axons). Each neuron receives electrochemical inputs from other neurons at the dendrites. If the sum of these electrical inputs is sufficiently powerful to activate the neuron, it transmits an electrochemical signal along the axon, and passes this signal to the other neurons whose dendrites are attached at any of the axon terminals. These attached neurons may then fire. It is important to note that a neuron fires only if the total signal received at the cell body exceeds a certain level. The neuron either fires or it doesn't; there are no different grades of firing. So, our entire brain is composed of these interconnected electrochemical transmitting neurons. This is the model on which artificial neural networks are based. Thus far, artificial neural networks haven't even come close to modeling the complexity of the brain, but they have proven to be good at problems which are easy for a human but difficult for a traditional computer, such as image recognition and predictions based on past knowledge.
A fundamental difference between traditional computers and artificial neural networks is the way in which they function. One of the major advantages of the neural network is its ability to do many things at once. With traditional computers, processing is sequential: one task, then the next, then the next, and so on. While computers function logically with a set of rules and calculations, artificial neural networks can function via equations, pictures, and concepts. Based upon the way they function, traditional computers have to learn by rules, while artificial neural networks learn by example, by doing something and then learning from it.
Hopfield neural networks have found applications in a broad range of disciplines [3-5] and have been studied both in the continuous and discrete time cases by many researchers. Most neural networks can be classified as either continuous or discrete. In spite of this broad classification, there are many real-world systems and natural processes that behave in a piecewise continuous style interlaced with instantaneous and abrupt changes (impulses). The periodic dynamics of Hopfield neural networks is one of the realistic and attractive models for researchers. Hopfield networks are a special case of recurrent networks. These networks have feedback connections, have no hidden layers, and the weight matrix is symmetric. These networks are most appropriate when the input can be represented in exact binary form. Signal transmission between the neurons causes time delays. Therefore, the dynamics of Hopfield neural networks with discrete or distributed delays is of fundamental concern. Many neural networks today use fewer than 100 neurons and only need occasional training. In these situations, software simulation is usually found sufficient. Current neural network technologies are expected to improve in the very near future as researchers develop better methods and network architectures.
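The discrete Hopfield model described above (feedback connections, no hidden layer, symmetric weight matrix, binary states) can be sketched in a few lines. This is only an illustrative toy, not the delayed/impulsive formulation the paper studies: it stores one bipolar pattern with the standard Hebbian rule and recovers it from a corrupted copy; the six-unit pattern is an arbitrary example.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: symmetric weight matrix with zero diagonal."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    """Synchronous threshold updates: each unit becomes +1 or -1."""
    s = np.array(state, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

# Store one bipolar pattern, then recover it from a one-bit corruption.
pattern = np.array([1, -1, 1, -1, 1, -1])
W = train_hopfield([pattern])
noisy = pattern.copy()
noisy[0] = -1                       # flip one bit
print(recall(W, noisy))             # converges back to the stored pattern
```

Because the weight matrix is symmetric with zero diagonal, each update can only decrease the network's energy, which is why the dynamics settle into the stored pattern rather than oscillating.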
In the present paper, we briefly summarize the historical background as well as developments of artificial neural networks, and present recent formulations of the continuous and discrete counterparts of a class of Hopfield-type neural networks modeled using functional differential equations in the presence of delay, periodicity, impulses and finite distributed delays. Combining some ideas of [4,6-10] and [11], we obtain a sufficient condition for the existence and global exponential stability of a unique periodic solution of the discrete system considered.
Artificial Neural Networks (ANN)
An artificial neural network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information; see [12] and the references given therein for more details. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.
The first artificial neuron was produced in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts [2]. But the technology available at that time did not allow them to do too much. Neural networks process information in a similar way to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example. Much is still unknown about how the brain trains itself to process information, so theories abound. An artificial neuron is a device with many inputs and one output (Figure 1). The neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong in the taught list of input patterns, the firing rule is used to determine whether to fire or not. An important application of neural networks is pattern recognition. Pattern recognition can be implemented by using a feed-forward (Figure 2) neural network that has been trained accordingly. During training, the network is trained to associate outputs with input patterns. When the network is used, it identifies the input pattern and tries to output the associated output pattern. The power of neural networks comes to life when a pattern that has no output associated with it is given as an input. In this case, the network gives the output that corresponds to a taught input pattern that is least different from the given pattern. Hopfield-type neural networks are mainly applied either as associative memories or as optimization solvers. In both applications, the stability of the networks is a prerequisite.
The equilibrium points (stable states) of the network characterize all possible optimal solutions of the optimization problem, and stability of the network guarantees convergence to the optimal solutions. Therefore, stability is fundamental for the network design. As a result of this fact, the stability analysis of Hopfield-type networks has received extensive attention from many researchers; see [4,6-9,11,13] and the references given therein. The above neuron does not do anything that conventional computers do not already do. A more sophisticated neuron (Figure 3) is the McCulloch and Pitts model (MCP). The difference from the previous model is that the inputs are 'weighted': the effect that each input has at decision making is dependent on the weight of the particular input. The weight of an input is a number which, when multiplied with the input, gives the weighted input. These weighted inputs are then added together and if they exceed a pre-set threshold value, the neuron fires. In any other case the neuron does not fire. In mathematical terms, the neuron fires if and only if
X1W1 + X2W2 + X3W3 + … > T,
where Wi, i = 1, 2, . . ., are weights, Xi, i = 1, 2, . . ., inputs, and T a threshold. The addition of input weights and of the threshold makes this neuron a very flexible and powerful one. The MCP neuron has the ability to adapt to a particular situation by changing its weights and/or threshold. Various algorithms exist that cause the neuron to ‘adapt’; the most used ones are the Delta rule and the back-error propagation. The former is used in feed-forward networks and the latter in feedback networks.
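The weighted-threshold firing rule above is easy to express directly in code. The following minimal Python sketch implements it; the AND-gate weights and threshold are illustrative choices, not values from the article:

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: fires (1) iff the weighted input sum exceeds T."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# With weights [1, 1] and threshold 1.5, the neuron fires only when
# both inputs are active, i.e. it computes a logical AND.
print(mcp_neuron([1, 1], [1, 1], 1.5))  # 1
print(mcp_neuron([1, 0], [1, 1], 1.5))  # 0
```

Adapting the neuron to a new task, as the Delta rule does, amounts to adjusting the weights and/or the threshold while leaving this firing rule unchanged.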
Neural networks have wide applicability to real world business problems. In fact, they have already been successfully applied in many industries. Since neural networks are best at identifying patterns or trends in data, they are well suited for prediction or forecasting needs including sales forecasting, industrial process control, customer research, data validation, risk management, and target marketing.
ANN are also used in the following specific paradigms: recognition of speakers in communications; diagnosis of hepatitis; recovery of telecommunications from faulty software; interpretation of multi-meaning Chinese words; undersea mine detection; texture analysis; three-dimensional object recognition; hand-written word recognition; and facial recognition.
To read more about this article https://irispublishers.com/gjes/fulltext/artificial-neural-networks-and-hopfield.ID.000601.php

Rate this pavilion from 1-10? Trabeculae Pavilion is a lightweight architecture that fuses advancements in Additive Manufacturing with bio-inspired computational design. Designed by Actlab.Polimi.it in 2018, the project looks into 3D Printing for answers to the emerging problem of scarcity in material resources. The design is based on a computational process that finds inspiration in Nature, specifically in the materialization logics of the trabeculae, the internal cells that form the bone microstructure. From this investigation, custom algorithms have been developed to support the creation of a cellular load-responsive structure with continuous variations in sizing, topology, orientation and section, in order to maximize material efficiency. (at Politecnico di Milano) https://www.instagram.com/p/B87HghAn19t/?igshid=eqolkv9kk3so
Different Types of Graphs and its application (Data Structure)
Graphs, Graph Representation, undirected graph, directed graph, Depth first search, Breadth first search, Spanning tree, Prim's Algorithm, Kruskal's Algorithm, Shortest path, Dijkstra's algorithm, Floyd's Algorithm, Topological ordering on directed acyclic graphs, Topological ordering algorithm, Warshall's Algorithm, Hamiltonian Paths, Applications of graphs
http://www.knowsh.com
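Of the topics listed, the topological ordering algorithm is compact enough to sketch here. The following is an illustrative Python implementation of Kahn's algorithm (repeatedly output a vertex with in-degree zero); the dressing-order example graph is hypothetical, not from the linked notes:

```python
from collections import deque

def topological_order(graph):
    """Kahn's algorithm on a DAG given as {vertex: [successors]}.
    Repeatedly removes a vertex whose in-degree has dropped to zero."""
    indegree = {v: 0 for v in graph}
    for succs in graph.values():
        for v in succs:
            indegree[v] = indegree.get(v, 0) + 1
    queue = deque(v for v, d in indegree.items() if d == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph.get(v, []):
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    if len(order) != len(indegree):
        raise ValueError("graph has a cycle; no topological ordering exists")
    return order

# Edges point from a task to the tasks that must come after it.
dag = {"shirt": ["tie"], "tie": ["jacket"], "trousers": ["jacket"], "jacket": []}
print(topological_order(dag))  # shirt before tie, both before jacket
```

The cycle check falls out for free: if some vertices never reach in-degree zero, the graph was not acyclic and no valid ordering exists.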
Intro to Graph Algorithms Solution
1.) (25 points) Perform a depth-first search on each of the following graphs; whenever there’s a choice of multiple vertices, pick the one that is alphabetically first. (a.) Classify each edge in Graph X as a tree edge, forward edge, back edge, or cross edge. Solution: (b.) Give the pre and post number of each vertex in Graph X. (d.) How many topological orderings does this graph have? 3.) (25…
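The pre/post numbers the exercise asks for come from a standard depth-first search that ticks a clock on entering and leaving each vertex. A minimal Python sketch follows; the example graph is hypothetical, not Graph X from the assignment:

```python
def dfs_pre_post(graph):
    """DFS visiting vertices in alphabetical order, recording pre/post numbers.
    The [pre, post] intervals classify edge (u, v): nested with v inside u means
    tree or forward edge; u inside v means back edge; disjoint means cross edge."""
    pre, post, clock = {}, {}, [1]

    def explore(u):
        pre[u] = clock[0]; clock[0] += 1
        for v in sorted(graph.get(u, [])):   # alphabetical tie-breaking
            if v not in pre:
                explore(v)
        post[u] = clock[0]; clock[0] += 1

    for u in sorted(graph):                  # restart DFS from unvisited roots
        if u not in pre:
            explore(u)
    return pre, post

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
pre, post = dfs_pre_post(g)
print(pre, post)
```

In this example the edge C→D is a cross edge, since D's interval [3, 4] is closed before C's [6, 7] opens; counting topological orderings, by contrast, generally requires enumerating linear extensions rather than a single DFS.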
“Weird Formalism” and Surface to Solid
“In “Preface: Weird Formalism” of Contagious Architecture: Computation, Aesthetics, and Space, Parisi calls for a redesign of the traditionally accepted grid. The author “argues for a new digital space that no longer or not fully coincides with Deleuze and Guattari ’s notions of ‘striated’ (metric) and ‘smooth’ (vectorial and projective or topological) space. Striated space is ‘gridded, linear, metric, and optic,’ going on to describe it “as the space of logos, based on the deductive reduction of infinities to discrete unities constituting the building blocks of reason” (x). It is this rigidity of system which is at odds with the infinitely productive dynamics which result from large amounts of quantitative data being processed algorithmically. This is all to say that many notions regarding what Parisi calls digital space are incomputable to humans as these notions are at odds with human systems for rationalization.
Is there any medium through which these new spatio-temporal conceptualizations can be translated into the built world, or is the application of material a constraint which in and of itself negates the infinite nature of computation? Perhaps transforming data-driven design into built realities with which humans can inhabit and interact becomes not an exercise of designing the whole (a building/project as a complete idea), but attempting to locate the right functional part within the endless network of outputs that meets our needs for a built space, an inversion along the lines of “mereotopology of parts that are bigger than wholes” that Parisi describes (xvii)...” (Stewart, 12.4.19)
----
Exercise 5 (surface to solid) was data-driven at its core. Specifically, the act of procuring a surface condition (from topological datasets) as the basis of the design process might be considered an act of engaging with the algorithmic forms described by Parisi. Time (labor), prior knowledge of certain topographies (culture) and mechanical/material limitations (safety considerations and dimensions) of the exercise became the foundations of a system for engaging with the massive data sets. Without this sort of rationalized, yet semi-open-ended approach, there would have been no clear way to access the massive amounts of data in a meaningful way. So, perhaps cultural constraints can act as guides for navigating algorithmic architecture.
Source for Discussion:
Luciana Parisi, “Preface: Weird Formalism” in Contagious Architecture: Computation, Aesthetics, and Space, London, MIT Press 2013. x - xvii.