<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Nexus_Agnix]]></title><description><![CDATA[Full stack developer || JAVASCRIPT || Cryptography || Cybersecurity || Web3 || Blockchain]]></description><link>https://blogs.agnibhachakraborty.me</link><generator>RSS for Node</generator><lastBuildDate>Sun, 12 Apr 2026 14:40:48 GMT</lastBuildDate><atom:link href="https://blogs.agnibhachakraborty.me/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Are passwords safe anymore?]]></title><description><![CDATA[Our entire digital identity is completely dependent on a few “words” that we have created a few years ago. These “words” that are the backbone of our online identity are called Passwords, which are easily forgotten, written by us on some paper or not...]]></description><link>https://blogs.agnibhachakraborty.me/are-passwords-safe-anymore</link><guid isPermaLink="true">https://blogs.agnibhachakraborty.me/are-passwords-safe-anymore</guid><category><![CDATA[authen]]></category><category><![CDATA[passwords]]></category><category><![CDATA[cybersecurity]]></category><category><![CDATA[PostQuantumEncryption]]></category><category><![CDATA[Quantum Cryptography]]></category><dc:creator><![CDATA[Agnibha Chakraborty]]></dc:creator><pubDate>Sat, 27 Dec 2025 15:15:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766848026035/777d6b27-e270-4db8-bd4f-acb03a707c86.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Our entire digital identity is completely dependent on a few <strong>“words”</strong> that we have created a few years ago. 
These “words” that form the backbone of our online identity are called <strong>Passwords</strong>: easily forgotten, scribbled on paper or in notebooks, reused across platforms, and easily cracked by attackers. Passwords were not designed for today’s age of automation, where the internet is as essential as electricity; they were created for a world without the internet, not for today’s world of billions of users and countless attackers.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Passwords will ultimately disappear from the authentication sector and the disappearance will be sudden and inevitable. The authentication’s core principle will shift from “Something you know” to “Something you are” or “Something you own”.</div>
</div>

<h2 id="heading-history-of-digital-authentication">History of Digital Authentication</h2>
<p>The history of digital authentication stretches back several decades, to a time when computing and information technology looked completely different from today.</p>
<ol>
<li><h3 id="heading-1960s-birth-of-passwords">1960s → Birth of Passwords</h3>
<p> Passwords first came into the picture during the 1960s, when a computer was the size of a modern-day refrigerator. Computers were scarce and used only by government offices and a few other sectors, unlike today, when nearly everyone has access to one. Each system was shared by 10–15 users, so every user had their own password. These passwords were often written on paper, as there was no threat of theft. The internet was still a theoretical concept, far from practical reality, so hackers did not exist either.</p>
<p> Thus, passwords worked simply because the threats didn’t exist.</p>
</li>
<li><h3 id="heading-1990-to-2005-the-era-of-internet">1990 to 2005 → The Era of Internet</h3>
<p> This was the era when the internet was rapidly gaining popularity and penetrating every section of society. Simple, common passwords like <code>12345</code>, <code>iloveyou</code>, birthdays, and names were the norm, and no patterns or rules were enforced for passwords. This was largely due to overall <strong>low awareness</strong>, which in turn caused <strong>low risk perception</strong>.</p>
<p> Thus, users chose convenience over security.</p>
</li>
<li><h3 id="heading-2005-to-2015-the-complexity-era">2005 to 2015 → The Complexity Era</h3>
<p> This was when the internet’s footprint rose to a global level and reached individual smartphones. Dynamic, responsive, and interactive applications like Instagram and Facebook came into the picture, and companies began demanding complexity in passwords: uppercase, lowercase, special characters, numbers, and a minimum length (generally 8), for example <code>Johndoe#1677@</code>. These complexity rules improved the policy but not human behavior; users responded by repeating patterns, so entropy did not actually increase.</p>
</li>
<li><h3 id="heading-2015-to-2020-otp-amp-mfa-era">2015 to 2020 → OTP &amp; MFA Era</h3>
<p> This was when stronger security became imperative, as attackers were at their most active. Thousands of data breaches occurred between 2010 and 2020, with millions of records exposed annually.</p>
 <div data-node-type="callout">
 <div data-node-type="callout-emoji">💡</div>
 <div data-node-type="callout-text">Data breaches impacted major companies like <strong>Yahoo (3 billion accounts, 2013/14)</strong>, <strong>Equifax (143M, 2017)</strong>, and <strong>Marriott/Starwood (500M+, 2018)</strong>, involving hacking, ransomware, and phishing, highlighting trends in massive data theft during that decade.</div>
 </div>

<p> Companies pivoted to OTP (One-Time Password) based authentication, where a code is sent to a verified email address or phone number and expires after a short time. Security improved, but human error remained: victims of social engineering shared their OTPs with scammers and attackers. This led to the creation of MFA (multi-factor authentication) as a second layer of authentication, verifying OTPs through email, SMS, or push notifications, or using a dedicated authenticator application that synchronizes codes between the user and the platform.</p>
</li>
<li><h3 id="heading-2020-to-present-biometric-amp-hardware">2020 to Present → Biometric &amp; Hardware</h3>
<p> This was when major companies shifted their core authentication principle from “Something you know” to “Something you are” or “Something you own”, giving rise to biometric authentication: fingerprints, face IDs, retina scans, and so on.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766554811765/b95a83be-ecad-4896-801d-1feea9a641ec.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
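<p>The complexity-era point above, that policy rules don’t automatically add entropy, can be illustrated with a quick calculation (a sketch; the character-set sizes are illustrative):</p>

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Theoretical entropy of a truly random password: length * log2(charset)."""
    return length * math.log2(charset_size)

# An 8-character password drawn uniformly from ~94 printable ASCII
# characters has ~52 bits of theoretical entropy.
print(round(entropy_bits(94, 8), 1))  # → 52.4

# But a human-chosen pattern like "Johndoe#1677@" is closer to a pick from
# a small dictionary of names plus a few digits, which is far less random,
# so a policy-compliant password is usually much weaker than the formula suggests.
```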
<h2 id="heading-why-are-passwords-failing">Why are Passwords failing?</h2>
<ol>
<li><h3 id="heading-human-behavior">Human Behavior</h3>
<p> If we really want to understand why passwords are failing we have to understand this quote:</p>
<blockquote>
<p><strong>The real problem is not whether machines think but that men don't.</strong></p>
<p>— B.F. Skinner</p>
</blockquote>
<p> The real problem was never the password systems or the authentication mechanism; it was always the users. Multiple cybersecurity reports consistently find that 90% to 95% <strong>of data breaches involve a human element.</strong> In the present digital era, where nearly every major company, government department, and private-sector service operates online, users juggle 100+ online accounts. Password reuse is rampant: for example, a user keeps the same password for Netflix, Instagram, email, and even a banking application. If Netflix’s data is ever breached, everything else is compromised too. Users cannot really be blamed, because creating and remembering a unique password for every platform is both unrealistic and impractical.</p>
<p> Thus, passwords fail largely because humans fall victim to social engineering, confirming that <strong>humans are the weakest link, not the system.</strong></p>
</li>
<li><h3 id="heading-password-reuse-amp-single-point-of-failure">Password Reuse &amp; Single Point of Failure</h3>
<p> As discussed in the previous point, users tend to reuse the same passwords across multiple platforms, so a breach of one platform compromises the others too. This creates a single point of failure. Password managers were created to solve this, but they carry their own complexity: a single master password authenticates the password manager itself, so the single point of failure remains; it merely shifts from one place to another.</p>
<p> Therefore, the centralization of a password manager increases the risk of compromise.</p>
</li>
<li><h3 id="heading-weak-password-choices">Weak Password Choices</h3>
<p> Users tend to choose convenient passwords, which turn out to be very weak and predictable. Common weak patterns include birthdays, phone numbers, globally common passwords like “admin123” and “password123”, and predictable variations of all of these.</p>
<p> Thus, password cracking becomes very easy as hackers already know these patterns and their variations.</p>
</li>
<li><h3 id="heading-ai-based-password-cracking">AI-Based Password Cracking</h3>
<p> The advent of AI has impacted almost every sector of the IT industry, and attackers use it heavily too, for tasks like cracking passwords. The strength of AI-based cracking tools is that they do not brute-force blindly: they learn patterns, behaviors, and habits and make targeted, calculated guesses. Passwords of 8 to 10 characters, once considered a safe length, can now be cracked in seconds as powerful GPUs remove the old technical limits.</p>
<p> Thus, with the current boom of AI and GPUs, password length alone no longer guarantees safety.</p>
</li>
<li><h3 id="heading-phishing-the-biggest-killer">Phishing: The Biggest Killer</h3>
<p> Phishing is, and always will be, the biggest killer in the game of authentication. Attackers have become skilled enough that fake websites can look more convincing, and load faster, than the originals. Users fall prey to this and voluntarily hand over their passwords. Even a 20-to-25-word passphrase fails if it gets phished.</p>
<p> Thus, the strongest password becomes useless if handed over.</p>
</li>
<li><h3 id="heading-data-breaches">Data Breaches</h3>
<p> The industry has faced several data breaches over the last few years, affecting big-name companies like Domino’s, Zomato, BigBasket, and Air India.</p>
 <div data-node-type="callout">
 <div data-node-type="callout-emoji">💡</div>
 <div data-node-type="callout-text"><strong>Domino’s</strong> (2021) saw 180 million order records leaked, <strong>Air India</strong> (2021) suffered a breach of 4.5 million passenger records via its service provider SITA, <strong>Zomato</strong> (2017) had 17 million user emails and hashed passwords stolen, and <strong>BigBasket</strong> (2020) experienced a leak of over 20 million user profiles including phone numbers and addresses.</div>
 </div>

<p> This data shows that even large companies fail to protect their users’ passwords.</p>
</li>
<li><h3 id="heading-quantum-computing-threat">Quantum Computing Threat</h3>
<p> Quantum Computing is not merely a theory anymore, it is practically present and it poses an inevitable threat to cybersecurity. The threat may not be visible to us but it is real and urgent primarily due to the <em>"Harvest Now, Decrypt Later"</em> strategy where adversaries collect encrypted data today to decrypt it once quantum power matures.</p>
 <div data-node-type="callout">
 <div data-node-type="callout-emoji">💡</div>
 <div data-node-type="callout-text">In 2025, quantum computing reached a practical turning point: <strong>Google</strong>’s 105-qubit <strong>Willow</strong> chip performed a benchmark calculation in just 5 minutes that would take a classical supercomputer 10 septillion years; <strong>IBM</strong> is operating its 1,121-qubit <strong>Condor</strong> processor while executing a roadmap to deliver the first large-scale, fault-tolerant system by 2029; and <strong>Microsoft</strong>, in collaboration with Atom Computing, has demonstrated and offered commercial access to 28 logical qubits, the highest number of entangled logical qubits on record.</div>
 </div>

<p> This suggests that current encryption, which stands on the shoulders of RSA (<strong>Rivest–Shamir–Adleman</strong>) and ECC (<strong>Elliptic Curve Cryptography</strong>), will ultimately break, and text-based passwords will become a <em>cakewalk</em> for attackers.</p>
<p> Hence, passwords are becoming, or perhaps already are, <strong>technologically obsolete</strong>, not just weak.</p>
</li>
<li><h3 id="heading-industry-shift">Industry Shift</h3>
<p> In 2025, industry giants like Google, Apple, and Microsoft have designated passwords a <em>"vulnerable transitional technology"</em> and are transitioning to a <em>"password-less by default"</em> model to combat escalating cyber threats.</p>
 <div data-node-type="callout">
 <div data-node-type="callout-emoji">💡</div>
 <div data-node-type="callout-text">The vulnerability of traditional security was exposed by a historic data dump known as the <strong>"Mother of All Breaches"</strong>, which leaked over 16 billion unique credentials tied to major companies like Google, Apple, and Microsoft. It highlights why Microsoft now blocks roughly 7,000 password attacks every second, and why the persistence of "123456" as the most popular password across 7.6 million leaked accounts makes passwords a legacy risk.</div>
 </div>

<p> Thus, it can be concluded that industry leaders already know passwords won’t survive.</p>
</li>
</ol>
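<p>The pattern-based guessing mentioned under AI-based cracking can be sketched as a toy generator: given a base word, it emits the predictable variations attackers try first. This is a simplified illustration, not a real cracking tool:</p>

```python
def predictable_variations(base: str) -> list[str]:
    """Generate common human variations of a base word."""
    leet = str.maketrans({"a": "@", "o": "0", "i": "1", "s": "$"})
    candidates = {
        base,                       # password
        base.capitalize(),          # Password
        base + "123",               # password123
        base + "!",                 # password!
        base.capitalize() + "123",  # Password123
        base.translate(leet),       # p@$$w0rd
    }
    return sorted(candidates)

# A handful of cheap transformations covers a large share of "complex"
# passwords that technically satisfy uppercase/digit/symbol rules.
print(predictable_variations("password"))
```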
<h2 id="heading-authentication-without-passwords">Authentication without Passwords</h2>
<ol>
<li><h3 id="heading-otp-amp-mfa">OTP &amp; MFA</h3>
<p> OTPs (One-Time Passwords) and MFA (Multi-Factor Authentication) were discussed earlier in this article. An OTP is verified after the password, so it acts as a second factor of authentication. OTPs can be delivered to the user via a verified phone number or email, or generated by a dedicated authenticator application.</p>
<p> Pros:</p>
<ul>
<li>It adds an extra layer of protection.</li>
</ul>
</li>
</ol>
<p>    Limitations:</p>
<ul>
<li><p>Users fall victim to <strong>social engineering</strong> and end up sharing their OTPs.</p>
</li>
<li><p><strong>SIM swapping</strong> is another common attack through which users lose their OTPs.</p>
</li>
<li><p><strong>Email compromise</strong> exposes any OTPs delivered to that inbox.</p>
</li>
</ul>
<p>    Thus, OTPs and MFA may be better than passwords alone, but their security depends entirely on the channel through which the OTP is shared (email, phone, etc.), so they are not future-proof.</p>
<ol start="2">
<li><h3 id="heading-passkeys">Passkeys</h3>
<p> Passkeys are a secure, password-less authentication method based on industry standards from the <em>FIDO Alliance</em>. They replace traditional passwords with cryptographic key pairs that are split between your device and the service you are accessing:</p>
<ul>
<li><p><strong>User Device (Private Key):</strong> When you create a passkey, your device generates a unique <strong>private key</strong> that is stored securely on your hardware (such as a phone, laptop, or physical security key). This key never leaves your device and is never shared with anyone.</p>
</li>
<li><p><strong>Server (Public Key):</strong> A corresponding <strong>public key</strong> is sent to the website or app and stored on their server. Unlike a password, this public key is not secret: even if the server is hacked, an attacker cannot use the public key to log in to your account without your private key.</p>
</li>
</ul>
</li>
</ol>
<p>    Pros:</p>
<ul>
<li><p>No typing</p>
</li>
<li><p>No phishing</p>
</li>
<li><p>Brute-Force not possible</p>
</li>
<li><p>Fast &amp; cross-platform</p>
</li>
</ul>
<p>    Limitations:</p>
<ul>
<li><p>Device loss may create a huge problem</p>
</li>
<li><p>Syncing passkeys across devices and ecosystems can be problematic</p>
</li>
<li><p>Poor usability for non-tech users</p>
</li>
<li><p>Limited adoption</p>
</li>
</ul>
<p>    Thus, this mechanism is strong, but it is also heavily ecosystem-dependent.</p>
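<p>The public/private split described above can be demonstrated with a one-time Lamport signature, which needs nothing beyond hash functions from the Python standard library. Real passkeys use ECDSA/EdDSA key pairs under the WebAuthn/FIDO2 standards; this is a simplified stand-in (and a Lamport key must never be reused), but it shows the essential property: the server stores only the public key, which is useless for logging in on its own:</p>

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    """Device side: private key = 256 pairs of random secrets;
    public key = their hashes, safe to hand to the server."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits_of(challenge: bytes):
    digest = H(challenge)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, challenge: bytes):
    """Device reveals one secret per bit of H(challenge)."""
    return [sk[i][bit] for i, bit in enumerate(bits_of(challenge))]

def verify(pk, challenge: bytes, sig) -> bool:
    """Server checks each revealed secret against the stored hashes."""
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits_of(challenge)))

sk, pk = keygen()                    # registration: device keeps sk, server stores pk
challenge = secrets.token_bytes(16)  # login: server sends a fresh random challenge
print(verify(pk, challenge, sign(sk, challenge)))  # → True
```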
<ol start="3">
<li><h3 id="heading-biometrics">Biometrics</h3>
<p> This simply means authentication using your fingerprint, face, retina, etc. It is an effective mechanism because fingerprints, facial structures, and retinas are unique to each individual.</p>
<p> Pros:</p>
<ul>
<li><p>Unique per user</p>
</li>
<li><p>Easy and simple authentication</p>
</li>
<li><p>No memory required</p>
</li>
</ul>
</li>
</ol>
<p>    Limitations/Risks:</p>
<ul>
<li><p>With the rise of deepfakes, forging biometric data is becoming easier</p>
</li>
<li><p>3D-printed fingerprints</p>
</li>
<li><p>Immutable: unlike passwords, which can be changed easily, biometrics cannot be changed if leaked or compromised.</p>
</li>
</ul>
<p>    Thus, biometric authentication is secure but irreversible if compromised.</p>
<ol start="4">
<li><h3 id="heading-hardware-security-keys">Hardware Security Keys</h3>
<p> Hardware security keys are <strong>FIDO2-compliant</strong> physical devices that connect via USB or NFC to provide phishing-resistant authentication through a cryptographic <em>challenge-response process</em>. When a login is initiated, the server sends a challenge that the key signs with its internal private key, often requiring a physical touch or PIN to confirm user presence, before returning a verifiable signature to grant access.</p>
<p> Pros:</p>
<ul>
<li><p>Cannot be remotely hacked</p>
</li>
<li><p>No phishing</p>
</li>
<li><p>No brute-force</p>
</li>
</ul>
</li>
</ol>
<p>    Cons:</p>
<ul>
<li><p>The device can be lost</p>
</li>
<li><p>Expensive (₹1500–₹5000)</p>
</li>
<li><p>Poor usability</p>
</li>
<li><p>Backup complexity</p>
</li>
</ul>
<p>    Thus, hardware security keys are most secure today, but not mass-adoptable.</p>
<h2 id="heading-future-of-authentication">Future of Authentication</h2>
<p>The future of authentication will be completely different from the present scenario. Today’s mechanisms follow a <em>session-based approach</em>: the user authenticates once and receives a session for a certain amount of time. In the future, relying on this approach alone would be risky, and the following mechanisms may emerge:</p>
<ol>
<li><h3 id="heading-behavioral-biometrics">Behavioral Biometrics</h3>
<p> Behavioral biometrics is an authentication principle that records signals from your unique, consistent digital body language (typing speed, mouse movements, gestures, speech patterns, facial expressions) and builds a system that continuously verifies your identity in the background.</p>
<p> Thus, in this mechanism you become your own password.</p>
</li>
<li><h3 id="heading-continuous-authentication">Continuous Authentication</h3>
<p> Continuous authentication is a mechanism where authentication doesn’t stop after login. The system constantly verifies the user’s behavior (as described in the previous point), the device’s location, and the device’s integrity.</p>
<p> Therefore, according to this principle, <em>Trust becomes dynamic, not static</em>.</p>
</li>
<li><h3 id="heading-post-quantum-cryptography">Post-Quantum Cryptography</h3>
<p> As discussed earlier in this article, quantum computers are no longer mere theory. RSA and ECC will eventually, and inevitably, break. Therefore, quantum-resistant algorithms like <code>Kyber</code>, <code>Dilithium</code>, and <code>SPHINCS+</code> need to be adopted in encryption systems. Future authentication must be quantum-safe at its foundation.</p>
</li>
<li><h3 id="heading-ambient-authentication">Ambient Authentication</h3>
<p> This can be thought of as an <em>invisible-login system</em>. Such systems use GPS, Wi-Fi, motion sensors, Bluetooth, and device proximity to authenticate the user; for example, a payment verified by physical closeness to a POS device. As a result, authentication becomes frictionless and passive.</p>
</li>
<li><h3 id="heading-localization-of-identity">Localization of Identity</h3>
<p> Localization of identity is an utmost requirement in authentication systems. A core shift is essential: identity should stay with the user, the device, and the hardware, not on remote servers, for security reasons. Considering India alone, the country has one of the world’s largest digital populations, systems like UPI and Aadhaar, and a <em>smartphone-first</em> user base. Therefore, India needs password-less systems more than almost anyone.</p>
</li>
</ol>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Passwords are not merely weak anymore; they are fundamentally outdated. They were designed for a simpler digital era, not for a world driven by automation, AI-powered attacks, massive data breaches, and emerging quantum capabilities. Over time, increasing password complexity has only added friction for users without meaningfully improving security. Longer passwords, special characters, and frequent rotations fail to address the core issue: authentication systems that rely heavily on human memory and behavior are bound to break.</p>
<p>Password-less authentication offers a clear improvement by reducing the very factors that make passwords fail. It minimizes human error, significantly lowers the impact of phishing attacks, and eliminates large-scale password reuse across platforms. By moving away from static secrets, these systems reduce the blast radius of breaches and shift security away from guessable information.</p>
<p>The future of authentication lies in mechanisms that align better with modern threat models. Behavioral signals, hardware-backed identity, continuous verification, and cryptographic systems designed to withstand post-quantum attacks will define the next generation of digital trust. Authentication will no longer be a one-time checkpoint but an ongoing process that adapts to context, behavior, and risk in real time.</p>
<p>At its core, authentication is undergoing a fundamental transition — from <em>what you know</em> to <em>who you are</em> and <em>what you own</em>. In a post-AI, post-quantum world, reliably verifying identity without sacrificing privacy or usability may become the hardest challenge in cybersecurity. Passwords solved yesterday’s problems. Tomorrow’s world demands something far stronger.</p>
]]></content:encoded></item><item><title><![CDATA[Working of Unified Payments Interface]]></title><description><![CDATA[What is UPI?
UPI stands for Unified Payments Interface is a digital payment infrastructure which used by more than 66% of the Indian population. It's known to serve 491 million individuals as of July 2025 and accounts for 85% of all digital transacti...]]></description><link>https://blogs.agnibhachakraborty.me/upiworking</link><guid isPermaLink="true">https://blogs.agnibhachakraborty.me/upiworking</guid><category><![CDATA[UPI]]></category><category><![CDATA[fintech]]></category><category><![CDATA[#piyushgarag]]></category><category><![CDATA[backend]]></category><category><![CDATA[System Architecture]]></category><dc:creator><![CDATA[Agnibha Chakraborty]]></dc:creator><pubDate>Sun, 07 Dec 2025 06:18:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765088200573/b70aa183-3630-4d6f-ab92-744ce75f1812.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-what-is-upi">What is UPI?</h3>
<p>UPI, which stands for Unified Payments Interface, is a digital payment infrastructure used by more than 66% of the Indian population. It served 491 million individuals as of July 2025 and accounts for 85% of all digital transactions in India. It is an epitome of technological advancement in the finance sector, letting us transfer money from one entity to another within a few seconds. NPCI (National Payments Corporation of India) is the umbrella organisation responsible for all retail payment settlements in India. NPCI was established in 2008 under an initiative of the RBI (Reserve Bank of India), and it develops and maintains the UPI infrastructure.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">As of July 2025, 491 million individuals are using UPI encompassing 85% of all the digital transactions in India</div>
</div>

<h3 id="heading-how-digital-payments-were-done-before-upi">How digital payments were done before UPI?</h3>
<p>Digital payments existed in the pre-UPI era too, in the form of <strong>NEFT</strong> (National Electronic Funds Transfer), <strong>RTGS</strong> (Real-Time Gross Settlement), and <strong>IMPS</strong> (Immediate Payment Service). These services differ in parameters like payment speed, service availability, and minimum and maximum payment limits. For instance, IMPS payments were instant, RTGS payments took minutes, while NEFT took hours. The entire system was centrally controlled and monitored by the RBI.</p>
<p>These types of payments require the following details of both the sender and the receiver:</p>
<ul>
<li><p>Account Number</p>
</li>
<li><p>Bank Name</p>
</li>
<li><p>IFSC code</p>
</li>
</ul>
<p>For example, a sender who has an account with ICICI Bank will use its payment portal to send money to a receiver who has an account with HDFC Bank. In this process, both parties use their account number, bank name, and IFSC code to identify each other and help the system authenticate them.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757000021686/befe564b-e226-4a88-8869-af07b39a4499.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-how-upi-entered-the-game-and-changed-it-completely">How UPI entered the game and changed it completely :</h3>
<p>After the launch of UPI in April 2016, the online money-transfer scenario changed completely.</p>
<p>NPCI acts as the central controlling entity of the entire UPI infrastructure and can be regarded as its “brain”. It exposes secure private APIs whose access is limited to trusted banks (e.g., SBI, ICICI, HDFC); these APIs cannot be accessed by public systems. This internal NPCI system orchestrates the backbone of digital payments: it defines the flows by which an amount is deducted from the sender’s account and added to the receiver’s account, ensuring a transparent transaction system in real time.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757422342312/17361629-d1e5-4452-b8c8-281272cf185b.png" alt class="image--center mx-auto" /></p>
<p>Some entities which are involved in this entire process of UPI are :</p>
<ol>
<li><p><strong>CPSPs (Customer Payment Service Providers)</strong>: Products like Google Pay and PhonePe that end users (senders as well as receivers) use to interact with the UPI system. These PSPs do not have direct access to NPCI’s internal system through NPCI’s APIs; they interact with the banking systems, which in turn communicate with NPCI’s internal system.</p>
</li>
<li><p><strong>Banking systems</strong>: These are individual technological systems handled independently by the banks; for example, SBI, ICICI, and Axis each have their own. They act as middleware between the CPSPs (like Google Pay and PhonePe) and NPCI’s internal system, and they are the only entities in the entire UPI pipeline that can access the highly secure private APIs of NPCI’s internal system, which handles the core backbone functions of the pipeline.</p>
</li>
<li><p><strong>VPA (Virtual Payment Address)</strong>: A VPA serves as the unique identifier of each UPI user. It substitutes for the details (bank name, branch code, IFSC code, etc.) that the previous system used to identify the sender and the receiver. A VPA is generally of the form <code>username@bank_code</code>, for example <code>johndoe@oksbi</code>.</p>
</li>
</ol>
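<p>The VPA format above is simple enough to validate with a pattern check. Here is a hedged sketch; the exact character rules are defined by NPCI, so this regex is only an approximation:</p>

```python
import re

# Approximate VPA shape: username@bank_code (NPCI's full grammar may differ).
VPA_RE = re.compile(r"^[A-Za-z0-9._-]{3,}@[A-Za-z]{2,}$")

def parse_vpa(vpa: str) -> tuple[str, str]:
    """Split a VPA into (username, bank_code), rejecting malformed input."""
    if not VPA_RE.fullmatch(vpa):
        raise ValueError(f"not a valid VPA: {vpa!r}")
    username, bank_code = vpa.rsplit("@", 1)
    return username, bank_code

print(parse_vpa("johndoe@oksbi"))  # → ('johndoe', 'oksbi')
```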
<p>Because of this simplicity of function and usage, UPI has been widely adopted, even among the rural population of India.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Approximately 38% of the rural and semi-urban population in India uses UPI, according to a late 2024 <a target="_self" href="https://www.google.com/search?cs=1&amp;sca_esv=e7bf22627bcd1c5c&amp;sxsrf=AE3TifPLnL3R_Ui20e5h_NBUG2ukbz_3aA%3A1757424361742&amp;q=EY+report&amp;sa=X&amp;ved=2ahUKEwjz4ZT248uPAxU9xDgGHZCSAeIQxccNegQIAhAC&amp;mstk=AUtExfAyVx27Ce_Hh3pJHpPjx-Lol3E5VKX32ZKzvqhhMbD7TdyXJS16z8snOv2_WwKSmqh3vt7QnfYT1tWISg4yD3p4Yidyq9_QmoEz7zLfcGnH7VqXqUNQbVbT-mCfYAnTARGvBM0s0Kd3ErZw2u28UYFsDk4xNMoFY0Eaa9oDsIGgfJLS4jRqXvoGX7dezWt6_fPXBCdxCTW0NZCn6pY7q9WoFkFmoqHanEubYFTlgp37H2sNVOcPw3c_hyBt8j_kc4yd8Lkbk8K7NwTbfq903Dr3K7wWACjPMblArrJt-XJxvw&amp;csui=3">EY report.</a></div>
</div>

<h3 id="heading-process-in-which-the-upi-pipeline-works">Process in which the UPI pipeline works:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757423906638/dae238e0-e670-4c7e-98f3-4f437858c4d8.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>The user requests a payment through a CPSP (like Google Pay or PhonePe) by choosing the receiver’s VPA. The VPA can be selected directly, by scanning the receiver’s QR code, or by entering the receiver’s phone number (which ultimately resolves to the receiver’s VPA).</p>
</li>
<li><p>The CPSP forwards the sender’s payment request to the bank system. The bank system verifies whether there is sufficient balance in the user’s account to transfer the requested amount and passes the request to NPCI’s internal system.</p>
</li>
<li><p>NPCI’s internal system verifies the sender’s balance with the bank server. It also verifies whether the receiver’s VPA is valid.</p>
</li>
<li><p>After successful verification of all the mentioned information, NPCI finally requests user approval to proceed. This is the step where the sender enters their UPI PIN into the CPSP.</p>
</li>
<li><p>Once authentication completes, NPCI’s internal system performs the actual settlement: debiting the sender’s account and crediting the receiver’s account.</p>
</li>
</ol>
<p>If any check or validation in the above steps fails, the transaction automatically fails and rolls back; if every check passes, the transaction completes successfully.</p>
<p>I have described the send-money flow here; a similar pipeline applies to the receive-payment function as well.</p>
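<p>The steps above can be condensed into a toy pipeline of checks, where any failing check aborts the whole transfer. The following is a purely illustrative Python sketch — every name and data structure in it is hypothetical, not NPCI’s actual system or API:</p>

```python
# Toy sketch of the UPI flow described above (all names/data are made up;
# the real NPCI pipeline involves many more systems and safeguards).

def upi_transfer(banks, vpa_registry, sender, receiver_vpa, amount, pin, correct_pin):
    """Run the checks in order; any failure aborts (rolls back) the transfer."""
    # Steps 1-2: CPSP forwards the request; the bank checks the sender's balance.
    if banks.get(sender, 0) < amount:
        return "FAILED: insufficient balance"
    # Step 3: NPCI validates the receiver's VPA.
    if receiver_vpa not in vpa_registry:
        return "FAILED: invalid VPA"
    # Step 4: user authentication with the UPI PIN.
    if pin != correct_pin:
        return "FAILED: wrong PIN"
    # Step 5: settlement — debit the sender, credit the receiver.
    receiver = vpa_registry[receiver_vpa]
    banks[sender] -= amount
    banks[receiver] = banks.get(receiver, 0) + amount
    return "SUCCESS"

banks = {"alice": 500, "bob": 100}
vpas = {"bob@upi": "bob"}
print(upi_transfer(banks, vpas, "alice", "bob@upi", 200, "1234", "1234"))  # SUCCESS
print(banks)  # {'alice': 300, 'bob': 300}
```

<p>Note how the settlement step runs only after every earlier check has passed — exactly the ordering the numbered steps describe.</p>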
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Very few resources are openly available that explain the architecture of NPCI’s internal system, hence it could not be covered in this blog.</div>
</div>

<h3 id="heading-acknowledgement">Acknowledgement :</h3>
<p>Thanks to Piyush Garg sir for explaining the concepts behind the UPI pipeline; this article was possible only because of his efforts. I have tried my best to document what I understood and grasped from his video.</p>
<div class="embed-wrapper"><a class="embed-card" href="https://youtu.be/fqySz1Me2pI">https://youtu.be/fqySz1Me2pI</a></div>
]]></content:encoded></item><item><title><![CDATA[How the browser works?]]></title><description><![CDATA[What is a Browser? Why is it important to know about browser?
When we think of a browser, lets say Chrome, Brave, Safari, we consider it just as a piece of software. This undermines the browser’s real capabilities and diminishes the amount of attenti...]]></description><link>https://blogs.agnibhachakraborty.me/how-the-browser-works</link><guid isPermaLink="true">https://blogs.agnibhachakraborty.me/how-the-browser-works</guid><category><![CDATA[Browsers]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[HTML5]]></category><category><![CDATA[Developer]]></category><dc:creator><![CDATA[Agnibha Chakraborty]]></dc:creator><pubDate>Wed, 21 May 2025 18:01:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/QgldjOQbp7k/upload/cf7572c1b8b19345f7e2134ad9e76676.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-what-is-a-browser-why-is-it-important-to-know-about-browser">What is a Browser? Why is it important to know about browser?</h3>
<p>When we think of a browser, say Chrome, Brave, or Safari, we consider it just a piece of software. This understates the browser’s real capabilities and the attention it deserves. It would not be wrong to call the browser a near-OS, since it has so many capabilities: managing the network, enabling interaction, displaying and storing data, maintaining its own timers, and more. Thus there is a lot more to the browser than we usually think, which we will discuss in this article.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747815298956/f8c15387-e10b-4a39-99f3-027e004990fe.png" alt class="image--center mx-auto" /></p>
<p>The functional architecture of a browser can be divided into the following parts:</p>
<ol>
<li><p><strong>Data</strong> : The browser stores data in the form of local storage, cookies, session storage, etc. Nowadays even entire scripts (JavaScript code) can be stored in and executed from browser storage. The memory responsible for executing programs in the browser is also part of this section.</p>
</li>
<li><p><strong>User Interface :</strong> It is the part of the browser which is directly exposed to the end-user for display of web pages and interaction with those pages.</p>
</li>
<li><p><strong>Browser Engine :</strong> This is the most interesting and critical part of the browser, responsible for all of its workings. It can be further divided into the following:</p>
<ol>
<li><p>Rendering Engine : The part responsible for parsing the HTML and CSS, building the Document Object Model (DOM), and painting the page onto the screen.</p>
</li>
<li><p>Javascript Engine : The part that executes JavaScript; Chromium-based browsers typically use the V8 engine. The same kind of engine powers runtimes such as Node.js, Deno, and Bun.</p>
</li>
<li><p>Network Engine : It handles everything network-related for a web application: sending requests, receiving responses, interpreting status codes, and dealing with protocols such as HTTPS and WebSocket connections.</p>
</li>
<li><p>Timer Engine : This part handles timing for web applications. The <code>setTimeout</code> and <code>setInterval</code> functions we use in JavaScript are not executed directly by the JavaScript engine; instead they are implemented as Web APIs backed by the browser’s timer engine. Runtimes such as Node.js and Deno use a similar mechanism to maintain timing.</p>
</li>
</ol>
</li>
</ol>
<p>In this article, we will focus on the browser’s rendering engine, since we are studying the browser in the context of web pages.</p>
<h3 id="heading-what-is-html-actually">What is HTML actually?</h3>
<p>Most of us think HTML (HyperText Markup Language) is simply about writing <code>&lt;h1&gt;</code> or <code>&lt;p&gt;</code> tags, but it is not that simple. Have you ever wondered how a language with no access to memory and no access to the pixels of our screens (it is still debated whether HTML is a programming language at all) can render entire complex web pages containing animations, gradients, images, videos, and more?</p>
<p>HTML, which forms the skeleton of a web page, is never rendered as raw markup. The browser’s parser, written in C++ in engines like Blink and WebKit, converts the HTML we write into node objects, and it is these objects that the browser actually works with. We can see this by writing a simple line in the console of any Wikipedia page to get all the <code>&lt;h2&gt;</code> tags:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747820024892/9e1eeac7-0bd9-4301-8d91-39f8e201b1fc.png" alt class="image--center mx-auto" /></p>
<p>We can clearly see that we get a NodeList in return. But we never wrote any NodeList; we wrote HTML, which the browser’s C++ parser turned into these node objects.</p>
<p>It would not be wrong to say that the tags we write in our .html files are not, by themselves, what gets painted in our browsers; there is a lot more to understand, which we will discuss in this article.</p>
<h3 id="heading-how-the-rendering-engine-actually-works">How the Rendering Engine actually works?</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747843849215/a87f558c-10f6-4c46-b170-6339751e720e.png" alt class="image--center mx-auto" /></p>
<p>The rendering engine, as discussed earlier, is responsible for painting web pages on our screens. It is the rendering engine’s job to understand the HTML and generate the Document Object Model.</p>
<p>The two ultimate jobs of the Rendering Engine are - <strong>Displaying the content</strong> and <strong>Interaction with the content</strong>.</p>
<p>This process occurs in the following steps:</p>
<ul>
<li><p>We write our .html files; each such file is the <strong>DOCUMENT</strong> that the browser will render.</p>
</li>
<li><p>The initial step is loading the file into the browser. This file could be anywhere on the internet, on some server, in some database, or even on the local machine. Irrespective of where the file is located, the browser always treats it as a resource from “another machine”, even when it is present locally. That is one reason I described the browser as a near-OS.</p>
</li>
<li><p>The loaded file arrives as raw bytes, i.e., the 0s and 1s in which our computer stores and understands it.</p>
</li>
<li><p>The next step is character decoding, i.e., converting this raw data into characters of languages like English, Hindi, or Japanese. Standards like UTF-8 (which is backward compatible with ASCII) are used for this.</p>
</li>
<li><p>The next step is tokenisation, i.e., extracting language-specific tokens from the characters. In programming languages these tokens are generally keywords like <code>if</code>, <code>else</code>, <code>for</code>, <code>while</code>; in HTML they are tag names like <code>h1</code>, <code>p</code>, <code>html</code>, <code>body</code>, <code>head</code>, <code>style</code>, etc.</p>
</li>
<li><p>Now, these tokens are converted into <strong>OBJECTS</strong> in these formats as mentioned below :</p>
<pre><code class="lang-javascript">  {
      tag: "h1",
      title: "something",
      value: "something"
  },
  {
      tag: "head",
      title: "something",
      value: "something"
  }
  // similarly for all tags like body, html, style etc.
</code></pre>
</li>
<li><p>A flat pile of objects like this is not enough on its own. In the next step they are arranged according to their relationships, such as “sibling” and “child”. This process can be thought of as <strong>MODELLING</strong> the <strong>OBJECTS</strong> created from the <strong>DOCUMENT</strong>, which gives us the term <strong>Document Object Model</strong>, or <strong>DOM</strong>.</p>
</li>
<li><p>The exact same steps are followed for CSS files: loading raw bytes, decoding them to characters, tokenisation, and arranging the tokens by their relationships, which ultimately produces the CSS Object Model, or CSSOM.</p>
</li>
</ul>
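<p>The byte-to-DOM journey in the steps above can be sketched in a few lines. The following is a deliberately naive Python toy — the structures and names are made up for illustration, not how any real engine is written:</p>

```python
# Naive sketch of the pipeline above: raw bytes are decoded to characters,
# split into tokens, turned into objects, and arranged into a tree by
# parent/child relations. Real HTML parsers do far more (attributes,
# error recovery, implied tags), but the overall shape is the same.
import re

raw = b"<html><body><h1>Hi</h1><p>Hello</p></body></html>"
chars = raw.decode("utf-8")  # bytes -> characters (UTF-8)

# characters -> tokens: each token is an open tag, a close tag, or text
tokens = re.findall(r"<(/?)(\w+)>|([^<>]+)", chars)

# tokens -> objects, modelled into a tree (a miniature DOM)
root = {"tag": "document", "value": "", "children": []}
stack = [root]
for closing, tag, text in tokens:
    if text:                      # text token: becomes the current node's value
        stack[-1]["value"] += text
    elif closing:                 # close tag: step back up to the parent
        stack.pop()
    else:                         # open tag: new object, added as a child
        node = {"tag": tag, "value": "", "children": []}
        stack[-1]["children"].append(node)
        stack.append(node)

h1 = root["children"][0]["children"][0]["children"][0]
print(h1)  # {'tag': 'h1', 'value': 'Hi', 'children': []}
```

<p>The stack here is what encodes the “sibling” and “child” relations: every open tag pushes a new object under the current parent, and every close tag pops back up.</p>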
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">The rendering engine works on building the DOM. Whenever it encounters a stylesheet <code>&lt;link&gt;</code> tag, it simultaneously starts building the CSSOM (CSS Object Model). Thus we can consider that the browser engine works on the DOM and the CSSOM in parallel.</div>
</div>

<p>Now, let’s take a pause and see where we are. We have created the DOM and the CSSOM. Note that up to this point the DOM and the CSSOM are not aware of each other. This is where the concept of the <strong>Render Tree</strong> comes in: the two are combined into a single tree of visible, styled elements.</p>
<p>Once the DOM and the CSSOM are created, the browser engine brings in its mathematical capabilities. CSS describes sizes in pixels, percentages, rems, and ems, so all of these must be resolved into concrete positions and dimensions on the screen. The browser then draws the elements of the render tree onto the screen, a process called <strong>painting</strong>.</p>
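<p>The unit mathematics mentioned here can be illustrated with a small sketch. This is a simplification in Python (a hypothetical <code>to_pixels</code> helper, not any browser API); real layout additionally involves flow, flexbox, stacking contexts, and more:</p>

```python
# Toy illustration of the layout math: resolving CSS lengths (%, rem, em,
# px) to concrete pixel values before anything can be painted.

ROOT_FONT_SIZE = 16.0  # browsers default the root font size to 16px

def to_pixels(value, unit, parent_px=0.0, element_font_px=ROOT_FONT_SIZE):
    """Resolve one CSS length into device pixels."""
    if unit == "px":
        return value
    if unit == "%":                      # relative to the parent's size
        return parent_px * value / 100.0
    if unit == "rem":                    # relative to the ROOT font size
        return value * ROOT_FONT_SIZE
    if unit == "em":                     # relative to the element's own font size
        return value * element_font_px
    raise ValueError(f"unsupported unit: {unit}")

print(to_pixels(50, "%", parent_px=1280))      # 640.0
print(to_pixels(1.5, "rem"))                   # 24.0
print(to_pixels(2, "em", element_font_px=20))  # 40.0
```
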
<h3 id="heading-what-is-the-role-of-javascript-engine-in-the-process-of-rendering">What is the role of Javascript engine in the process of rendering?</h3>
<p>The JavaScript engine plays an important role in the painting process, as the <code>&lt;script&gt;</code> tag has the power to manipulate the entire DOM (as well as the CSSOM).</p>
<ul>
<li><p>While rendering, the browser engine works on building the DOM. Whenever it encounters a <code>&lt;script&gt;</code> tag, it halts DOM construction and executes the JavaScript first, because the script may entirely change the structure of the DOM.</p>
</li>
<li><p>Sometimes the JavaScript is loaded asynchronously with the <code>async</code> attribute, so it does not block parsing. In frameworks like React.js and Next.js there is a related process called hydration, where client-side JavaScript takes over the server-rendered HTML to make it interactive; it too has the capability to modify the DOM.</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">The interaction between JavaScript and the CSSOM is subtler: in practice, <strong>JavaScript execution is halted until the CSSOM is completely ready</strong>, because a script may read or change computed styles. Educators in the field such as Hitesh Choudhary discuss this behaviour in depth, and its finer details still attract plenty of debate.</div>
</div>

<h3 id="heading-ia">Conclusion</h3>
<p>Understanding the architecture and workings of the browser is essential for a developer; it helps us build better applications. In this article we discussed the architecture of the browser, the different engines working inside it, how the entire “painting” process of a web page works, and why the browser can be considered a near-OS.</p>
<p>Watch Hitesh Choudhary sir’s video to understand this concept in more depth:</p>
<div class="embed-wrapper"><a class="embed-card" href="https://youtu.be/5rLFYtXHo9s">https://youtu.be/5rLFYtXHo9s</a></div>
]]></content:encoded></item><item><title><![CDATA[Decoding AI Jargons with Chai]]></title><description><![CDATA[Aj ki class ke baad, aap ChatGPT ko kabhi uss nazariye se dekh hi nahi paoge ~ Piyush Garg Sir

This is what our mentor told us in the beginning of our first class of Gen AI cohort , and this is absolutely true. This lecture was my first face off wit...]]></description><link>https://blogs.agnibhachakraborty.me/decoding-ai-jargons-with-chai</link><guid isPermaLink="true">https://blogs.agnibhachakraborty.me/decoding-ai-jargons-with-chai</guid><category><![CDATA[ChaiCode]]></category><category><![CDATA[ChaiCohort]]></category><dc:creator><![CDATA[Agnibha Chakraborty]]></dc:creator><pubDate>Tue, 08 Apr 2025 11:19:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/f0JGorLOkw0/upload/9cda009d0495acc7777a1d95626b48c1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Aj ki class ke baad, aap ChatGPT ko kabhi uss nazariye se dekh hi nahi paoge (After today’s class, you will never look at ChatGPT the same way again) ~ Piyush Garg Sir</p>
</blockquote>
<p>This is what our mentor told us at the beginning of our first class of the Gen AI cohort, and it is absolutely true. This lecture was my first face-off with generative AI, and it truly exceeded my expectations. Behind writing a simple “Hi” to ChatGPT and getting “Hey, What’s Up!” in return, there are many underlying mechanisms at work. Here are some of the key takeaways I personally gathered, which I am diving into sip by sip.</p>
<p>Nowadays AI is everywhere, from personal assistants to nearly every software application, so as engineers it becomes essential to know the jargon common in the AI domain. So let’s start with some sips of chai:</p>
<hr />
<p><a target="_blank" href="https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744102895553/d4c34df4-1dad-4563-9ce6-0b35c6091f35.png" alt class="image--center mx-auto" /></a></p>
<h3 id="heading-sip-1">Sip 1 :</h3>
<ul>
<li><p><strong>NLP (Natural Language Processing)</strong> → It is the field that lets AI understand human language and respond in the same human language.</p>
</li>
<li><p><strong>Tokenisation</strong> → It is the process by which the AI breaks our input into chunks and processes them as tokens. Each AI model has its own way of tokenising input. For example, OpenAI (ChatGPT) uses a tokenisation library called <a target="_blank" href="https://github.com/openai/tiktoken.git">tiktoken</a>; other models like Gemini and Claude have their own tokenisation algorithms.</p>
</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> tiktoken

encoder = tiktoken.encoding_for_model(<span class="hljs-string">'gpt-4o'</span>)

print(<span class="hljs-string">"Vocab Size"</span>, encoder.n_vocab) 

text = <span class="hljs-string">"The cat sat on the mat"</span>
tokens = encoder.encode(text)

print(<span class="hljs-string">"Tokens"</span>, tokens) 

my_tokens = [<span class="hljs-number">976</span>, <span class="hljs-number">9059</span>, <span class="hljs-number">10139</span>, <span class="hljs-number">402</span>, <span class="hljs-number">290</span>, <span class="hljs-number">2450</span>]
decoded = encoder.decode(my_tokens)
print(<span class="hljs-string">"Decoded"</span>, decoded)
</code></pre>
<ul>
<li><strong>Vector Embeddings</strong> → The tokens are then converted into vector embeddings: lists of numbers that capture each token’s meaning for the model. You can picture them as points in a high-dimensional space (say, a 3D graph), plotted so that semantically related words sit near each other. For example, if we check the vector point of the word “tea” in the <a target="_blank" href="https://projector.tensorflow.org/">Vector Embedding Projector</a>, we can see that nearby points include “coffee”, “beverage”, “cola”, “drink”, etc.</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> dotenv <span class="hljs-keyword">import</span> load_dotenv
<span class="hljs-keyword">from</span> openai <span class="hljs-keyword">import</span> OpenAI

load_dotenv()

client = OpenAI()

text = <span class="hljs-string">"Eiffel Tower is in Paris and is a famous landmark, it is 324 meters tall"</span>

response = client.embeddings.create(
    input=text,
    model=<span class="hljs-string">"text-embedding-3-small"</span>
)

print(<span class="hljs-string">"Vector Embeddings"</span>, response.data[<span class="hljs-number">0</span>].embedding)
</code></pre>
<h3 id="heading-sip-2">Sip 2 :</h3>
<ul>
<li>Attention &amp; Multi-head Attention → It is the mechanism through which tokens communicate with each other. In other words, through attention the model can focus on the important (most relevant) parts of the input. Multi-head attention runs several of these attention mechanisms in parallel, letting the model attend to different aspects of the input at once and thus understand its context better.</li>
</ul>
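<p>Sip 2 can be made concrete with a toy single-head attention example. This is an illustrative Python sketch with made-up 2-dimensional vectors, not a real transformer implementation:</p>

```python
# Toy scaled dot-product attention: each token's query is compared with
# every token's key; softmax turns the scores into weights; the output
# mixes the value vectors by those weights. Multi-head attention simply
# runs several of these in parallel. All numbers here are made up.
import math

def softmax(xs):
    m = max(xs)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    scale = math.sqrt(len(query))            # "scaled" dot-product attention
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    weights = softmax(scores)                # how strongly each token is attended to
    dim = len(values[0])
    out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return out, weights

# Three tokens with 2-d key/value vectors (purely illustrative).
keys   = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
values = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
out, weights = attend([1.0, 0.0], keys, values)
print("weights:", [round(w, 3) for w in weights])  # largest weight on the matching key
print("output :", [round(x, 3) for x in out])
```
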
<h3 id="heading-sip-3">Sip 3 :</h3>
<ul>
<li><p>FeedForward → It refers to the feed-forward neural network inside the transformer, often called its brain. It helps the model capture complex patterns, behaviour, and context in the input, based on which it produces the output.</p>
</li>
<li><p>Softmax → It is a mathematical function that turns the model’s raw scores into a probability distribution over the next word; the next word is then chosen from this distribution, and tweaking it controls the model’s creativity.</p>
</li>
<li><p>Temperature → It is the factor in an AI model that decides the creativity (i.e. randomness) of the output. For instance, at a low temperature the model would respond with “the sky is blue”, while at a high temperature it might respond with “The sky, a canvas of swirling nebulae, whispered ancient secrets”.</p>
</li>
</ul>
<hr />
<p>These were some of my personal takeaways from the lecture. It was an eye-opening session, and indeed I won’t see ChatGPT the way I used to.</p>
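<p>As a postscript to Sip 3, here is a tiny Python sketch (with made-up logits for three candidate next words) showing how softmax turns raw scores into probabilities and how temperature reshapes them:</p>

```python
# Softmax with temperature over hypothetical next-word scores ("logits").
import math

def softmax(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

words = ["blue", "vast", "nebulae"]
logits = [4.0, 2.0, 1.0]                     # made-up raw scores from the model

for t in (0.5, 1.0, 2.0):
    probs = softmax(logits, temperature=t)
    print(f"T={t}:", {w: round(p, 3) for w, p in zip(words, probs)})
# Low temperature concentrates probability on "blue" (predictable output);
# high temperature flattens the distribution (more creative sampling).
```
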
]]></content:encoded></item><item><title><![CDATA[Proof Of Work In Blockchain]]></title><description><![CDATA[Overview
Proof of Work (PoW) is a decentralized consensus mechanism used in blockchain networks to verify and add new transactions to the ledger. It works by requiring miners to solve complex computational puzzles to add a block to the blockchain, ea...]]></description><link>https://blogs.agnibhachakraborty.me/proof-of-work-in-blockchain</link><guid isPermaLink="true">https://blogs.agnibhachakraborty.me/proof-of-work-in-blockchain</guid><category><![CDATA[Blockchain]]></category><category><![CDATA[Bitcoin]]></category><category><![CDATA[Cryptocurrency]]></category><category><![CDATA[crypto]]></category><category><![CDATA[crypto wallet]]></category><category><![CDATA[Cryptography]]></category><category><![CDATA[Ethereum]]></category><category><![CDATA[Solidity]]></category><category><![CDATA[Web3]]></category><category><![CDATA[Web 3.0 Blockchain Market]]></category><category><![CDATA[web3 development]]></category><category><![CDATA[# THEORETICAL CYBER SECURITY DEFINED]]></category><dc:creator><![CDATA[Agnibha Chakraborty]]></dc:creator><pubDate>Fri, 26 Apr 2024 10:27:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/JrjhtBJ-pGU/upload/77198e35990d8f1f7b063c99b764b561.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-overview">Overview</h2>
<p>Proof of Work (PoW) is a decentralized consensus mechanism used in blockchain networks to verify and add new transactions to the ledger. It works by requiring miners to solve complex computational puzzles to add a block to the blockchain, earning cryptocurrency as a reward for their efforts. The purpose of PoW is to make it difficult and expensive to tamper with the blockchain, ensuring its integrity and security.</p>
<h2 id="heading-how-pow-works">How PoW Works</h2>
<p>Here's a simplified explanation of how PoW works:</p>
<ol>
<li><p>A miner receives a new transaction.</p>
</li>
<li><p>The miner creates a block that includes the transaction and other information.</p>
</li>
<li><p>The miner hashes the block header to create a unique identifier.</p>
</li>
<li><p>The miner attempts to find a nonce (a random number) that, when combined with the block header hash, produces a result less than or equal to the current target difficulty.</p>
</li>
<li><p>If the miner finds a valid nonce, they broadcast the block to the network.</p>
</li>
<li><p>Other nodes on the network verify that the block is valid and that the transaction is not already included in the blockchain.</p>
</li>
<li><p>If the block is valid, it is added to the blockchain and the miner is rewarded with cryptocurrency.</p>
</li>
</ol>
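<p>Steps 3-5 can be sketched as a miniature miner. This is a simplified Python illustration — real Bitcoin hashes a binary 80-byte header with double SHA-256 against a much harder target — but the search loop has exactly this shape:</p>

```python
# Minimal proof-of-work sketch: hash the block header with candidate
# nonces until the digest falls below the difficulty target.
import hashlib

def mine(header, difficulty_bits=16):
    """Find a nonce so that sha256(header + nonce) has `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)    # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:         # valid proof of work found
            return nonce, digest
        nonce += 1                           # otherwise keep guessing

nonce, digest = mine("prev_hash|merkle_root|timestamp", difficulty_bits=16)
print("nonce:", nonce)
print("hash :", digest)  # starts with at least 4 hex zeros (16 zero bits)
```

<p>Note the asymmetry that makes PoW useful: finding the nonce takes many guesses, but any node can verify it with a single hash.</p>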
<h2 id="heading-key-components">Key Components</h2>
<ul>
<li><p><strong>Nodes</strong>: Devices that actively participate in the network and maintain a copy of the network's data, validating transactions, relaying messages, and maintaining the integrity of the network.</p>
</li>
<li><p><strong>Nonce</strong>: A number used only once, which miners guess to solve the puzzle and add the block to the blockchain.</p>
</li>
<li><p><strong>Difficulty Target</strong>: The range within which the miners' guesses must fall to solve the puzzle and add the block to the blockchain.</p>
</li>
<li><p><strong>Proof of Work</strong>: The process of solving complex computational puzzles to demonstrate the expenditure of computational effort and the right to add a block to the blockchain.</p>
</li>
</ul>
<h2 id="heading-advantages-and-disadvantages">Advantages and Disadvantages</h2>
<p>Advantages of PoW include:</p>
<ul>
<li><p>Security: PoW ensures the integrity and security of the blockchain by making it difficult for bad actors to overtake the network.</p>
</li>
<li><p>Decentralization: The network is not reliant on a central authority, making it more decentralized and resilient to censorship or single points of failure.</p>
</li>
</ul>
<p>Disadvantages of PoW include:</p>
<ul>
<li><p>Energy Consumption: PoW requires vast amounts of energy, especially as more miners join the network, leading to a growing carbon footprint.</p>
</li>
<li><p>Inefficiency: The process of mining and solving puzzles can be resource-intensive and time-consuming.</p>
</li>
</ul>
<p>In summary, PoW is a consensus mechanism used in blockchain networks to maintain the security and integrity of the blockchain. It involves miners solving complex computational puzzles to add blocks to the blockchain and earn cryptocurrency rewards. However, PoW has some drawbacks, such as high energy consumption and inefficiency.</p>
]]></content:encoded></item><item><title><![CDATA[Aggregation Pipelines in MongoDB]]></title><description><![CDATA[The concept of aggregation pipelines in MongoDB is considered one of the complex topics in MongoDB databases. Therefore, it is often found in SDE II or above job roles and rarely for SDE I.

In this concept, we generally consider that aggregation pip...]]></description><link>https://blogs.agnibhachakraborty.me/aggregation-pipelines-in-mongodb</link><guid isPermaLink="true">https://blogs.agnibhachakraborty.me/aggregation-pipelines-in-mongodb</guid><category><![CDATA[MongoDB]]></category><category><![CDATA[mongoose]]></category><category><![CDATA[chai-aur-backend]]></category><category><![CDATA[#HiteshChaudhary ]]></category><category><![CDATA[backend]]></category><category><![CDATA[#pwskills]]></category><dc:creator><![CDATA[Agnibha Chakraborty]]></dc:creator><pubDate>Fri, 08 Mar 2024 18:42:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/cijiWIwsMB8/upload/a469ad8c13bdc30b87344e2cbc352506.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The concept of aggregation pipelines in MongoDB is considered one of the complex topics in MongoDB databases. Therefore, it is often found in SDE II or above job roles and rarely for SDE I.</p>
<ul>
<li><p>In this concept, we generally consider that an aggregation pipeline consists of one or more stages that process documents.</p>
</li>
<li><p>Each stage is used to perform a function on the input documents.</p>
</li>
<li><p>The documents that are output from a stage are passed to the next stage.</p>
</li>
<li><p>An aggregation pipeline can return results for groups of documents.</p>
</li>
</ul>
<pre><code class="lang-javascript"><span class="hljs-comment">// how to write aggregation pipelines.</span>
db.collection.aggregate([
    {}, <span class="hljs-comment">// first pipeline</span>
    {}  <span class="hljs-comment">// second pipeline </span>
])
</code></pre>
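<p>To see how documents flow from one stage to the next, here is a tiny in-memory simulation in Python. This is not MongoDB itself — the <code>match</code> and <code>add_fields</code> helpers and the sample data are made up purely to mirror the behaviour of <code>$match</code> and <code>$addFields</code>:</p>

```python
# Toy in-memory simulation of pipeline stages: each stage takes a list of
# documents and returns a new list, which feeds the next stage.

def match(docs, criteria):
    """Like $match: keep only documents where every criterion holds."""
    return [d for d in docs if all(d.get(k) == v for k, v in criteria.items())]

def add_fields(docs, **computed):
    """Like $addFields: add computed fields to each document."""
    return [{**d, **{k: f(d) for k, f in computed.items()}} for d in docs]

books = [
    {"title": "Dune", "genre": "Science Fiction", "price": 10, "tax": 2},
    {"title": "Emma", "genre": "Romance", "price": 8, "tax": 1},
]

# The output of each stage becomes the input of the next, as described above.
stage1 = match(books, {"genre": "Science Fiction"})
stage2 = add_fields(stage1, total=lambda d: d["price"] + d["tax"])
print(stage2)  # [{'title': 'Dune', 'genre': 'Science Fiction', 'price': 10, 'tax': 2, 'total': 12}]
```
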
<h3 id="heading-some-stages-used-in-aggregation-pipelines">Some stages used in Aggregation Pipelines :</h3>
<ol>
<li><p><strong><mark>$match </mark></strong> : In MongoDB, the $match operator is used within the aggregation framework to filter documents based on specified criteria. It is similar to the WHERE clause in SQL. Here's how you can use it:</p>
<pre><code class="lang-javascript"> db.collection.aggregate([
   { <span class="hljs-attr">$match</span>: { <span class="hljs-attr">field</span>: value } }
 ])
</code></pre>
<p> In this example:</p>
<ul>
<li><p><code>db.collection.aggregate()</code> is used to perform aggregation operations on the collection.</p>
</li>
<li><p><code>{ $match: { field: value } }</code> is the stage where you specify the filtering criteria. Replace <code>field</code> with the field you want to filter on and <code>value</code> with the value you want to match.</p>
</li>
</ul>
</li>
</ol>
<p>    For instance, if you have a collection of documents representing books and you want to find books with a specific genre:</p>
<pre><code class="lang-javascript">    db.books.aggregate([
      { <span class="hljs-attr">$match</span>: { <span class="hljs-attr">genre</span>: <span class="hljs-string">"Science Fiction"</span> } }
    ])
</code></pre>
<p>    This query will return all documents from the <code>books</code> collection where the <code>genre</code> field is equal to "Science Fiction".</p>
<ol start="2">
<li><p><strong><mark>$lookup </mark></strong> : In MongoDB, the <code>$lookup</code> stage is used within the aggregation framework to perform a left outer join between documents from two collections. This allows you to combine data from multiple collections in a single query. Here's how you can use it:</p>
<pre><code class="lang-javascript"> db.collection1.aggregate([
   {
     <span class="hljs-attr">$lookup</span>: {
       <span class="hljs-attr">from</span>: <span class="hljs-string">"collection2"</span>,
       <span class="hljs-attr">localField</span>: <span class="hljs-string">"field1"</span>,
       <span class="hljs-attr">foreignField</span>: <span class="hljs-string">"field2"</span>,
       <span class="hljs-attr">as</span>: <span class="hljs-string">"outputField"</span>
     }
   }
 ])
</code></pre>
<p> In this example:</p>
<ul>
<li><p><code>db.collection1.aggregate()</code> is used to perform aggregation operations on <code>collection1</code>.</p>
</li>
<li><p><code>$lookup</code> is the stage where you specify the details of the join.</p>
</li>
<li><p><code>from</code> specifies the name of the collection to join with (<code>collection2</code>).</p>
</li>
<li><p><code>localField</code> specifies the field from the input documents (<code>collection1</code>) to join on (<code>field1</code>).</p>
</li>
<li><p><code>foreignField</code> specifies the field from the documents of the "from" collection (<code>collection2</code>) to join on (<code>field2</code>).</p>
</li>
<li><p><code>as</code> specifies the name of the output field that will contain the joined array.</p>
</li>
</ul>
</li>
</ol>
<p>    For instance, if you have two collections, <code>orders</code> and <code>products</code>, and you want to retrieve all orders with details of the corresponding products:</p>
<pre><code class="lang-javascript">    db.orders.aggregate([
      {
        <span class="hljs-attr">$lookup</span>: {
          <span class="hljs-attr">from</span>: <span class="hljs-string">"products"</span>,
          <span class="hljs-attr">localField</span>: <span class="hljs-string">"productId"</span>,
          <span class="hljs-attr">foreignField</span>: <span class="hljs-string">"_id"</span>,
          <span class="hljs-attr">as</span>: <span class="hljs-string">"productDetails"</span>
        }
      }
    ])
</code></pre>
<p>    In this example, <code>orders</code> and <code>products</code> are the collections, <code>productId</code> is the field in the <code>orders</code> collection that matches with the <code>_id</code> field in the <code>products</code> collection, and <code>productDetails</code> is the name of the output field that will contain the joined array with product details.</p>
<ol start="3">
<li><p><strong><mark>$addFields </mark></strong> : In MongoDB, the <code>$addFields</code> stage is used within the aggregation framework to add new fields to documents in the pipeline. This stage is particularly useful when you want to include computed fields or transform existing fields. Here's how you can use it:</p>
<pre><code class="lang-javascript"> db.collection.aggregate([
   {
     <span class="hljs-attr">$addFields</span>: {
       <span class="hljs-attr">newField</span>: expression
     }
   }
 ])
</code></pre>
<p> In this example:</p>
<ul>
<li><p><code>db.collection.aggregate()</code> is used to perform aggregation operations on the collection.</p>
</li>
<li><p><code>$addFields</code> is the stage where you specify the fields to be added.</p>
</li>
<li><p><code>newField</code> is the name of the new field you want to add.</p>
</li>
<li><p><code>expression</code> is the expression used to compute the value of the new field.</p>
</li>
</ul>
</li>
</ol>
<p>    For instance, if you have a collection of documents representing employees and you want to add a new field <code>totalSalary</code> that combines <code>salary</code> and <code>bonus</code>:</p>
<pre><code class="lang-javascript">    db.employees.aggregate([
      {
        <span class="hljs-attr">$addFields</span>: {
          <span class="hljs-attr">totalSalary</span>: { <span class="hljs-attr">$sum</span>: [<span class="hljs-string">"$salary"</span>, <span class="hljs-string">"$bonus"</span>] }
        }
      }
    ])
</code></pre>
<p>    In this example, <code>$sum</code> is an aggregation operator that calculates the sum of the values in the provided array (the <code>$add</code> operator would work equally well here). <code>$salary</code> and <code>$bonus</code> refer to the existing fields in the documents, and <code>totalSalary</code> is the new field that will contain the sum of <code>salary</code> and <code>bonus</code> for each document.</p>
<p>    You can use any valid expression to compute the value of the new field, including arithmetic operations, functions, or even concatenation of strings.</p>
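<p>    A plain-JavaScript sketch of what <code>$addFields</code> computes per document, covering both an arithmetic expression and a <code>$concat</code>-style string expression (the employee data here is hypothetical):</p>
<pre><code class="lang-javascript">// Hypothetical documents, mirroring the employees example above
const employees = [
  { firstName: "Ada", lastName: "Lovelace", salary: 5000, bonus: 500 }
];

// Per-document equivalent of:
//   totalSalary via { $sum: ["$salary", "$bonus"] }
//   fullName    via { $concat: ["$firstName", " ", "$lastName"] }
const withFields = employees.map(doc => ({
  ...doc, // $addFields keeps all existing fields
  totalSalary: doc.salary + doc.bonus,
  fullName: doc.firstName + " " + doc.lastName
}));

console.log(withFields[0].totalSalary); // 5500
console.log(withFields[0].fullName);    // "Ada Lovelace"
</code></pre>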
<ol start="4">
<li><p><strong><mark>$project</mark></strong> : In MongoDB, the <code>$project</code> stage is used within the aggregation framework to shape documents by including, excluding, or renaming fields. It allows you to reshape documents before passing them to the next stage in the aggregation pipeline. Here's how you can use it:</p>
<pre><code class="lang-javascript"> db.collection.aggregate([
   {
     <span class="hljs-attr">$project</span>: {
       <span class="hljs-attr">field1</span>: <span class="hljs-number">1</span>,          <span class="hljs-comment">// include field1</span>
       <span class="hljs-attr">field2</span>: <span class="hljs-number">1</span>,          <span class="hljs-comment">// include field2</span>
       <span class="hljs-attr">newField</span>: <span class="hljs-string">"$field3"</span>, <span class="hljs-comment">// include field3 and rename it as newField</span>
       <span class="hljs-attr">_id</span>: <span class="hljs-number">0</span>             <span class="hljs-comment">// exclude _id field</span>
     }
   }
 ])
</code></pre>
<p> In this example:</p>
<ul>
<li><p><code>db.collection.aggregate()</code> is used to perform aggregation operations on the collection.</p>
</li>
<li><p><code>$project</code> is the stage where you specify the fields to be included, excluded, or renamed.</p>
</li>
<li><p><code>field1: 1</code> and <code>field2: 1</code> include the fields <code>field1</code> and <code>field2</code> in the output document.</p>
</li>
<li><p><code>newField: "$field3"</code> includes <code>field3</code> in the output document but renames it as <code>newField</code>.</p>
</li>
<li><p><code>_id: 0</code> excludes the <code>_id</code> field from the output document.</p>
</li>
</ul>
</li>
</ol>
<p>    For instance, if you have a collection of documents representing employees and you only want to include their name and age fields in the output:</p>
<pre><code class="lang-javascript">    db.employees.aggregate([
      {
        <span class="hljs-attr">$project</span>: {
          <span class="hljs-attr">name</span>: <span class="hljs-number">1</span>,
          <span class="hljs-attr">age</span>: <span class="hljs-number">1</span>,
          <span class="hljs-attr">_id</span>: <span class="hljs-number">0</span>
        }
      }
    ])
</code></pre>
<p>    This will output documents containing only the <code>name</code> and <code>age</code> fields, with the <code>_id</code> field excluded.</p>
<p>    Additionally, you can use <code>$project</code> to create computed fields, apply expressions, or reshape documents according to your requirements.</p>
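<p>    A plain-JavaScript sketch of a <code>$project</code> stage with a computed field, e.g. <code>{ $project: { name: 1, pay: { $add: ["$salary", "$bonus"] }, _id: 0 } }</code> (the field names here are hypothetical):</p>
<pre><code class="lang-javascript">// Hypothetical employee documents
const employees = [
  { _id: 1, name: "Ada", age: 36, salary: 5000, bonus: 500 }
];

// Per-document equivalent of the projection above
const projected = employees.map(doc => ({
  name: doc.name,               // name: 1  (included)
  pay: doc.salary + doc.bonus   // computed field
  // _id: 0 — excluded, so it is simply not copied over
}));

console.log(projected); // [{ name: "Ada", pay: 5500 }]
</code></pre>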
<blockquote>
<p>The database is always in another continent, so always use <code>await</code>.</p>
</blockquote>
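<p>In application code (for example with the Node.js MongoDB driver), <code>aggregate()</code> resolves asynchronously, so its result must be awaited. A minimal sketch with a mocked collection standing in for a real driver connection (the mock is an assumption for illustration):</p>
<pre><code class="lang-javascript">// With a real driver, aggregate() returns a cursor whose toArray()
// resolves asynchronously — hence the await.
async function adults(collection) {
  return await collection.aggregate([{ $match: { age: { $gte: 18 } } }]).toArray();
}

// Hypothetical mock standing in for a real collection
const mock = {
  aggregate: (pipeline) => ({
    toArray: async () => [{ name: "Ada", age: 36 }]
  })
};

adults(mock).then(docs => console.log(docs.length)); // 1
</code></pre>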
<h3 id="heading-content-resources">Content Resources :</h3>
<ul>
<li><p>Courtesy : <a class="user-mention" href="https://hashnode.com/@hiteshchoudhary">Hitesh Choudhary</a></p>
</li>
<li><p>For detailed video explanation follow :</p>
<p>  %[https://youtu.be/SUZKhBvxW5c?si=rR4azTi6cJ4rNHn7]</p>
</li>
</ul>
]]></content:encoded></item></channel></rss>