
Batch Reverse IP Domain Lookup Tool

2025/3/13 23:28:39  Source: https://blog.csdn.net/xc_214/article/details/146202640

0x01 Tool Overview:

        ReverseIP-CN is a reverse IP lookup tool optimized for the Chinese network environment. It quickly finds all websites associated with a given IP or domain, which makes it a handy tool for security testing and asset discovery.

0x02 Features:

1. Smart input parsing

  • Supports multiple input formats: URL, IP, or domain name
  • Automatically cleans up non-standard input
    (example: http://baidu.com/ → baidu.com); see the sketch below
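
A minimal sketch of this normalization, using only the standard library (the tool's own clean_target function, shown in the source below, does the same and then resolves the host to an IP):

from urllib.parse import urlparse
import socket

def normalize(target):                        # illustrative helper, not part of the tool
    target = target.strip(" '\"")             # drop stray quotes and whitespace
    if not target.startswith(("http://", "https://")):
        target = "http://" + target           # add a scheme so urlparse can find the host
    hostname = urlparse(target).hostname      # http://baidu.com/ -> baidu.com
    return socket.gethostbyname(hostname) if hostname else None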

2. Efficient querying

  • Multi-threaded concurrent processing (5 threads by default); see the sketch below
  • Optimized for domestic (CN) APIs, so responses are faster
  • Smart de-duplication for accurate results
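
Roughly, the concurrency looks like the following sketch (standard-library ThreadPoolExecutor; lookup_one is only a placeholder for the per-target query, which in the tool is process_target):

from concurrent.futures import ThreadPoolExecutor

def lookup_one(target):
    # placeholder: resolve the target and query the reverse-IP APIs
    return target, []

targets = ["127.0.0.1", "example.com"]
with ThreadPoolExecutor(max_workers=5) as executor:    # 5 worker threads by default
    results = dict(executor.map(lookup_one, targets))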

3. Visual reporting

  • Key information highlighted in the terminal
  • Notable entries automatically marked in the Excel report (see the sketch below)
  • Results can be exported (.xlsx format)
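
The Excel highlighting relies on openpyxl's PatternFill; a minimal sketch follows (the fill color and column headers match the source below, the data row is made up):

import openpyxl
from openpyxl.styles import PatternFill

HIGHLIGHT = PatternFill(start_color="FFFF00", fill_type="solid")   # yellow

wb = openpyxl.Workbook()
ws = wb.active
ws.append(["原始输入", "IP地址", "关联域名"])                  # header row, as in the tool
ws.append(["example.com", "127.0.0.1", "www.example.com"])     # made-up sample row
ws.cell(row=2, column=3).fill = HIGHLIGHT                      # highlight a notable entry
wb.save("demo.xlsx")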

4. Extensible APIs

  • The query APIs the tool depends on can be extended with your own; see the sketch below
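
In the source below, every data source is an entry in the apis list inside fetch_domains_cn, so extending the tool means adding another dict of the same shape. A hedged sketch (the second entry is a hypothetical JSON endpoint, not a real service):

ip = "127.0.0.1"   # inside fetch_domains_cn this is the function argument

apis = [
    {
        # existing ip138 endpoint, parsed with a regex
        'url': f'https://site.ip138.com/{ip}/',
        'method': 'regex',
        'pattern': r'<li><span class="date">.*?</span><a href="/(.*?)/" target="_blank">'
    },
    {
        # hypothetical JSON endpoint -- replace with a real, stable API
        'url': f'https://example.com/reverse-ip?ip={ip}',
        'method': 'json',
        'field': 'domain'
    },
]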

0x03 Installing Dependencies:

pip install -r requirements.txt
# requirements.txt
requests>=2.26.0      # HTTP request library
openpyxl>=3.0.9       # Excel file handling

0x04 Usage:

Parameter reference

Parameter    Long form    Description
-u           --url        Specify a single target URL/IP
-l           --list       Specify a file containing multiple targets
-o           --output     Specify the output Excel filename (optional)
-h           --help       Show help information

Query a single target:

python revip_cn.py -u "target URL/IP"

Batch mode (read targets from a file):

python revip_cn.py -l targets.txt -o results.xlsx

Supported formats in the target file:

https://127.0.0.1
http://127.0.0.1
127.0.0.1:8080
aaa.example.com
https://127.0.0.1:8080
127.0.0.1

Screenshots:

0x05 Source Code and Project Links:

Project repository:

https://github.com/iSee857/ReverseIP-CN

Contributions of usable, stable APIs are very welcome.

Feel free to open an issue or leave a comment to get in touch.

Project source code:

import re
import sys
import socket
import random
import getopt
import requests
import openpyxl
import time
from urllib.parse import urlparse
from openpyxl.styles import PatternFill
from openpyxl.utils import get_column_letter
from concurrent.futures import ThreadPoolExecutor

HIGHLIGHT_FILL = PatternFill(start_color='FFFF00', fill_type='solid')
HEADER_FILL = PatternFill(start_color='DDDDDD', fill_type='solid')
VERSION = "V2.1"
AUTHOR = "iSee857"


def print_banner():
    banner = f"""
██████╗ ███████╗███████╗██████╗ ███████╗██████╗ ███████╗
██╔══██╗██╔════╝██╔════╝██╔══██╗██╔════╝██╔══██╗██╔════╝
██████╔╝█████╗  █████╗  ██║  ██║█████╗  ██████╔╝███████╗
██╔══██╗██╔══╝  ██╔══╝  ██║  ██║██╔══╝  ██╔══██╗╚════██║
██║  ██║███████╗██║     ██████╔╝███████╗██║  ██║███████║
╚═╝  ╚═╝╚══════╝╚═╝     ╚═════╝ ╚══════╝╚═╝  ╚═╝╚══════╝
    Reverse IP Lookup Tool {VERSION}
    Author: {AUTHOR}
    """
    print(banner)


def clean_target(target):
    """Intelligently clean the input target and resolve it to an IP"""
    try:
        target = target.strip(" '\"")
        if not target.startswith(('http://', 'https://')):
            target = f'http://{target}'
        parsed = urlparse(target)
        hostname = parsed.hostname
        if not hostname:
            return None
        return socket.gethostbyname(hostname)
    except Exception as e:
        print(f"解析失败: {str(e)}")
        return None


def user_agents():
    """User-Agent strings of mainstream browsers in China"""
    return [
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:83.0) Gecko/20100101 Firefox/83.0",
        "Mozilla/5.0 (Linux; Android 10; M2007J3SC) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Mobile Safari/537.36",
        "Mozilla/5.0 (iPhone; CPU iPhone OS 14_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.1 Mobile/15E148 Safari/604.1"
    ]


def fetch_domains_cn(ip):
    """Query the domains associated with an IP via domestic (CN) APIs"""
    headers = {
        'User-Agent': random.choice(user_agents()),
        'Referer': 'https://site.ip138.com/'
    }
    # The list of domestic APIs is configurable
    apis = [
        {
            'url': f'https://site.ip138.com/{ip}/',
            'method': 'regex',
            'pattern': r'<li><span class="date">.*?</span><a href="/(.*?)/" target="_blank">'
        },
        # Site is down
        # {
        #     'url': f'https://api.webscan.cc/?query={ip}',
        #     'method': 'json',
        #     'field': 'domain'
        # }
    ]
    domains = []
    for api in apis:
        try:
            session = requests.Session()
            session.trust_env = False
            response = session.get(
                api['url'],
                headers=headers,
                timeout=15,
                proxies={'http': None, 'https': None}
            )
            if response.status_code != 200:
                continue
            if api['method'] == 'regex':
                matches = re.findall(api['pattern'], response.text)
                cleaned = [m.strip() for m in matches if m.strip()]
                domains.extend(cleaned)
            elif api['method'] == 'json':
                data = response.json()
                if isinstance(data, list):
                    valid = [str(d.get(api['field'], '')).strip() for d in data]
                    domains.extend([v for v in valid if v])
            # De-duplicate and cap the count to guard against site redirects
            domains = list(set(domains))[:50]
            time.sleep(random.uniform(1, 2))
        except Exception as e:
            print(f"接口 {api['url']} 查询失败: {str(e)}")
            continue
    return domains


def process_target(target):
    """Process a single target"""
    ip = clean_target(target)
    if not ip:
        print(f"\n❌ 目标解析失败: {target}")
        print("-" * 50)
        return (target, None)
    domains = fetch_domains_cn(ip)
    original_host = urlparse(target).hostname or target.split('//')[-1].split('/')[0]
    highlighted_domains = [
        f"\033[93m{d}*\033[0m" if (ip in d or original_host in d) else d
        for d in domains if isinstance(d, str)
    ]
    ip_display = f"\033[92m{ip}\033[0m" if domains else ip
    print(f"\n► 原始输入: \033[94m{target}\033[0m")
    print(f"► 解析IP : {ip_display}")
    print(f"► 关联域名: {len(domains)} 个")
    if domains:
        print("  " + "\n  ".join(highlighted_domains))
    else:
        print("  未找到关联域名")
    print("-" * 50)
    return (target, {"ip": ip, "domains": domains})


def export_results(results, filename):
    """Export the results to Excel"""
    wb = openpyxl.Workbook()
    ws = wb.active
    ws.title = "反查结果"
    headers = ["原始输入", "IP地址", "关联域名"]
    for col, header in enumerate(headers, 1):
        ws.cell(row=1, column=col, value=header).fill = HEADER_FILL
    row_idx = 2
    for target, data in results.items():
        if not data:
            ws.append([target, None, "解析失败"])
            row_idx += 1  # keep the row counter in sync for failed targets
            continue
        domains = data['domains']
        original_host = urlparse(target).hostname
        highlight = any(
            (data['ip'] in d) or (original_host and original_host in d)
            for d in domains if isinstance(d, str)
        )
        row = [
            target,
            data['ip'],
            "\n".join(domains) if domains else "无结果"
        ]
        ws.append(row)
        if highlight:
            ws.cell(row=row_idx, column=3).fill = HIGHLIGHT_FILL
        row_idx += 1
    for col in ws.columns:
        max_len = max(len(str(cell.value)) for cell in col)
        ws.column_dimensions[get_column_letter(col[0].column)].width = max_len + 2
    wb.save(filename)
    print(f"\n✅ 结果已保存到: {filename}")


def main(argv):
    print_banner()
    targets = []
    output = "results.xlsx"
    try:
        opts, args = getopt.getopt(argv, "hu:l:o:", ["help", "url=", "list=", "output="])
    except getopt.GetoptError:
        print("参数错误!使用 -h 查看帮助")
        sys.exit(2)
    for opt, arg in opts:
        if opt == '-h':
            print(f"Usage: {sys.argv[0]} [-u URL/IP] [-l FILE] [-o FILE]")
            sys.exit()
        elif opt in ("-u", "--url"):
            targets.append(arg)
        elif opt in ("-l", "--list"):
            try:
                with open(arg, 'r') as f:
                    targets.extend(line.strip() for line in f if line.strip())
            except FileNotFoundError:
                print(f"文件不存在: {arg}")
                sys.exit(1)
        elif opt in ("-o", "--output"):
            output = arg
    if not targets:
        print("请指定目标(-u/-l)")
        sys.exit(1)
    print(f"\n🔍 开始处理 {len(targets)} 个目标...")
    results = {}
    with ThreadPoolExecutor(max_workers=5) as executor:
        futures = [executor.submit(process_target, t) for t in targets]
        for future in futures:
            target, data = future.result()
            results[target] = data
    export_results(results, output)


if __name__ == "__main__":
    main(sys.argv[1:])

0x06 Notes:

The domestic (CN) APIs are rate-limited, so it is recommended to:

        keep a single batch query to no more than 50 targets

        leave a 1-2 second interval between queries (a throttling sketch follows these recommendations)
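
If you drive lookups from your own scripts rather than the bundled CLI, the same throttling the tool applies internally could look like this sketch (the loop body is a placeholder):

import time
import random

targets = ["127.0.0.1", "example.com"]        # illustrative targets
for target in targets:
    # ... perform the lookup for `target` here ...
    time.sleep(random.uniform(1, 2))          # wait 1-2 s between queries to respect rate limits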

The result file is highlighted automatically:

        Yellow: entries that contain the original domain

        Green IP: the IP has associated domains
