The Java heap
The Java heap, where every Java object is allocated, is the area of memory you're most intimately connected with when writing Java applications. The JVM was designed to insulate us from the host machine's
peculiarities, so it's natural to think about the heap when you think about memory. You've no doubt encountered a Java heap OutOfMemoryError — caused by an object leak or by not making the heap big enough to store all your data — and have probably learned a few tricks to debug these scenarios. But as your Java applications handle more data and more concurrent load, you may start to experience OutOfMemoryErrors that can't be fixed using your normal bag of tricks — scenarios in which the errors are thrown even though the Java heap isn't full. When this happens, you need to understand what is going on inside your Java Runtime Environment (JRE).
Java applications run in the virtualized environment of the Java runtime, but the runtime itself is a native program written in a language (such as C) that consumes native resources, including native memory. Native memory is the memory available to the runtime process, as distinguished from the Java heap memory that a Java application uses. Every virtualized resource — including the Java heap and Java threads — must be stored in native memory, along with the data used by the virtual machine as it runs. This means that the limitations on native memory imposed by the host machine's hardware and operating system (OS) affect what you can do with your Java application.
This article is one of two covering the same topic on different platforms. In both, you'll learn what native memory is, how the Java runtime uses it, what running out of it looks like, and how to debug a native OutOfMemoryError. This article covers AIX and focuses on the IBM® Developer Kit for Java.
Hardware
Many of the restrictions that a native process experiences are imposed by the hardware, not the OS. Every computer has a processor and some random-access memory (RAM), also known as physical memory. A processor interprets a stream of data as instructions to execute; it has one or more processing units that perform integer and floating-point arithmetic as well as more advanced computations. A processor has a number of registers — very fast memory elements
that are used as working storage for the calculations that are performed; the register size determines the largest number that a single calculation can use.
The processor is connected to physical memory by the memory bus. The size of the physical address (the address used by the processor to index physical RAM) limits the amount of memory that can be addressed. For example, a 16-bit physical address can address from 0x0000 to 0xFFFF, which gives 2^16 = 65536 unique memory locations. If each address references a byte of storage, a 16-bit physical address allows the processor to address 64KB of memory.
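The addressing arithmetic above is easy to verify in code. A minimal sketch (the class and variable names are illustrative, not from the article):

```java
public class AddressSpace {
    public static void main(String[] args) {
        int addressBits = 16;
        long locations = 1L << addressBits;   // 2^16 = 65536 unique addresses
        long kb = locations / 1024;           // one byte per address -> 64KB
        System.out.println(locations + " locations, " + kb + "KB addressable");
        // prints: 65536 locations, 64KB addressable
    }
}
```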
Processors are described as being a certain number of bits. This normally refers to the size of the registers, although there are exceptions — such as 31-bit 390 — where it refers to the physical address size. For desktop and server platforms, this number is 31, 32, or 64; for embedded devices and microprocessors, it can be as low as 4. The physical address size can be the same as the register width but could be larger or smaller. Most 64-bit processors can run 32-bit programs when running a suitable OS.
Operating systems and virtual memory
If you were writing applications to run directly on the processor without an OS, you could use all memory that the processor can address (assuming enough physical RAM is connected). But to enjoy features such as multitasking and hardware abstraction, nearly everybody uses an OS of some kind to run their programs.
In multitasking OSs, including AIX, more than one program uses system resources, including memory. Each program needs to be allocated regions of physical memory to work in. It's possible to design an OS such that every program works directly with physical memory and is trusted to use only the memory it has been given. Some embedded OSs work like this, but it's not practical in an environment consisting of many programs that are not tested together because any program could corrupt the memory of other programs or the OS itself.
Virtual memory allows multiple processes to share physical memory without being able to corrupt one another's data. In an OS with virtual memory (such as AIX and many others), each program has its own virtual address space — a logical region
of addresses whose size is dictated by the address size on that system (so 31, 32, or 64 bits for desktop and server platforms). Regions in a process's virtual address space can be mapped to physical memory, to a file, or to any other addressable storage. The OS can move data held in physical memory to and from a swap area when it isn't being used, to make the best use of physical memory. When a program tries to access memory using a virtual address, the OS in combination with on-chip hardware maps that virtual address to the physical location. That location could be physical RAM, a file, or the swap partition. If a region of memory has been moved to swap space, then it's loaded back into physical memory before being used.
Each instance of a native program runs as a process. On AIX a process is a collection of information about OS-controlled resources (such as file and socket information), a virtual address space, and at least one thread of execution.
Although a 32-bit address can reference 4GB of data, a program is not given the entire 4GB address space for its own use. As with other OSs, the address space is divided up into sections, only some of which are available for a program to use; the OS uses the rest. Compared to Windows and Linux, the AIX memory model is more complicated and can be tuned more precisely.
The AIX 32-bit memory model is divided into, and managed as, sixteen 256MB segments. Figure 2 shows the layout of the default 32-bit AIX memory model.
The user program can directly control only 12 of the 16 segments — 3GB of the 4GB. The most significant restriction is that the native heap and all thread stacks are held in segment 2. To accommodate programs with larger data requirements, AIX provides the large memory model.
The large memory model allows a programmer or a user to annex some of the shared/mapped segments for use as native heap, either by supplying a linker option when the executable is built or by setting the LDR_CNTRL environment variable before the program is started. To enable the large memory model at run time, set LDR_CNTRL=MAXDATA=0xN0000000, where N is between 1 and 8. Any value outside this range will cause the default memory model to be used. In the large
memory model, the native heap starts at segment 3; segment 2 is only used for the primordial (initial) thread stack.
When you use the large memory model, the segment allocation is static; that is, if you request four data segments (for 1GB of native heap) but then only allocate one segment (256MB) of native heap, the other three data segments are unavailable for memory mapping.
If you want a native heap larger than 2GB and you are running AIX 5.1 or later, you can use the AIX very large memory model. The very large memory model, like the large memory model, can be enabled for an executable at compile time with a linker option or at run time using the LDR_CNTRL environment variable. To enable the very large memory model at run time,
set LDR_CNTRL=MAXDATA=0xN0000000@DSA, where N is between 0 and D if you use AIX 5.2 or greater, or between 1 and A if you are using AIX 5.1. The value of N specifies the number of segments that can be used for native heap but, unlike in the large memory model, these segments can be used for mmapping if necessary.
The IBM Java runtime uses the very large memory model unless it's overridden with the LDR_CNTRL environment variable.
Setting N between 1 and A uses the segments between 3 and C for native storage, as you would expect. From AIX 5.2, setting N to B or higher changes the memory layout — it no longer uses segments D and F for shared libraries and allows them to be used for native storage or mmapping. Setting N to D gives the maximum 13 segments (3.25GB) of native heap. Setting N to 0 allows segments 3 through F to be used for mmapping — the native heap is held in segment 2.
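The segment arithmetic is worth making concrete. A small sketch (the class and method names are my own, not an AIX API) that converts a segment count into the native heap it provides:

```java
public class Maxdata {
    // Each AIX 32-bit segment is 256MB; MAXDATA's N selects how many
    // segments are made available for the native heap.
    static long nativeHeapBytes(int segments) {
        return segments * 256L * 1024 * 1024;
    }

    public static void main(String[] args) {
        System.out.println(nativeHeapBytes(4));   // N=4   -> 1GB of native heap
        System.out.println(nativeHeapBytes(13));  // N=0xD -> 3.25GB, the maximum
    }
}
```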
A native memory leak or excessive native memory use will cause different problems depending on whether you exhaust the address space or run out of physical memory. Exhausting the address space typically happens only with 32-bit processes — because the maximum 4GB of address space is easy to allocate. A 64-bit process has a user space of hundreds or thousands of gigabytes and is hard to fill up even if you try. If you do exhaust the address space of a Java process, then the Java runtime can start to show the odd symptoms I'll describe later in the article. When running on
a system with more process address space than physical memory, a memory leak or excessive use of native memory will force the OS to swap out some of the virtual address space. Accessing a memory address that has been swapped is a lot slower than reading a resident (in physical memory) address because it must be loaded from the hard drive.
If you are simultaneously trying to use so much RAM-backed virtual memory that your data cannot be held in physical memory, the system will thrash — that is, spend most of its time copying memory back and forth from swap space. When this happens, the performance of the computer and the individual applications will become so poor the user can't fail to notice there's a problem. When a JVM's Java heap is swapped out, the garbage collector's performance becomes extremely poor, to the extent that the application may appear to hang. If multiple Java runtimes are in use on a single machine at the same time, the physical memory must be sufficient to fit all of the Java heaps.
How the Java runtime uses native memory
The Java runtime is an OS process that is subject to the hardware and OS constraints I outlined in the preceding section. Runtime environments provide capabilities that are driven by some unknown user code; that makes it impossible to predict which resources the runtime environment will require in every situation. Every action a Java application takes inside the managed Java environment can potentially affect the resource requirements of the runtime that provides that environment. This section describes how and why Java applications consume native memory.
The Java heap and garbage collection
The Java heap is the area of memory where objects are allocated. The IBM Developer Kits for Java Standard Edition have one physical heap, although some specialist Java runtimes such as IBM WebSphere Real Time have multiple heaps.
The heap can be split up into sections such as the IBM gencon policy's nursery and tenured areas. Most Java heaps are implemented as contiguous slabs of native memory.
The heap's size is controlled from the Java command line using the -Xmx and -Xms options (mx is the maximum size of the heap, ms is the initial size). Although the logical heap (the area of memory that is actively used) will grow and shrink according to the number of objects on the heap and the amount of time spent in garbage collection (GC), the amount of native memory used remains constant and is dictated by the -Xmx value: the maximum heap size. The memory manager relies on the heap being a contiguous slab of memory, so it's impossible to allocate more native memory when the heap needs to expand; all heap memory must be reserved up front.
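You can observe the logical heap moving inside the fixed -Xmx reservation from within Java itself, using the standard java.lang.Runtime API:

```java
public class HeapSizes {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -Xmx (the full reservation); totalMemory() is the
        // current logical heap; freeMemory() is the unused space within it.
        System.out.println("max:   " + rt.maxMemory());
        System.out.println("total: " + rt.totalMemory());
        System.out.println("free:  " + rt.freeMemory());
    }
}
```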
Reserving native memory is not the same as allocating it. When native memory is reserved, it is not backed with physical memory or other storage. Although reserving chunks of the address space will not exhaust physical resources, it does prevent that memory from being used for other purposes. A leak caused by reserving memory that is never used is just as serious as leaking allocated memory.
The IBM garbage collector on AIX minimises the use of physical memory by decommitting (releasing the backing storage for) sections of the heap as the used area of heap shrinks.
For most Java applications, the Java heap is the largest user of process address space, so the Java launcher uses the Java heap size to decide how to configure the address space. Table 2 lists the default memory model configuration for different ranges of heap size. You can override the memory model by setting the LDR_CNTRL environment variable yourself before starting the Java launcher. If you are embedding the Java runtime or writing your own launcher, you will need to configure the memory model yourself — either by specifying the appropriate linker flag or by setting LDR_CNTRL before starting your launcher.
The just-in-time (JIT) compiler
The JIT compiler compiles Java bytecode to optimised native binary code at run time. This vastly improves the run-time speed of Java runtimes and allows Java applications to run at speeds comparable to native code.
Compiling bytecode uses native memory (in the same way that a static compiler such as gcc requires memory to run), but the output from the JIT (the executable code) must also be stored in native memory. Java applications that contain many JIT-compiled methods use more native memory than smaller applications.
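Some of this native use is visible through the standard management API. Pool names are VM-specific (HotSpot exposes a "CodeCache" pool holding JIT output; IBM VMs report their own names), so this sketch simply lists whatever the running VM makes visible:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryPools {
    public static void main(String[] args) {
        // Iterates every memory pool the VM reports, heap and non-heap alike;
        // non-heap pools include the areas backing JIT code and class data.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getName() + ": "
                    + pool.getUsage().getUsed() + " bytes used");
        }
    }
}
```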
Classes and classloaders
Java applications are composed of classes that define object structure and method logic. They also use classes from the Java runtime class libraries and may use third-party libraries. These classes need to be stored in memory for as long as they are being used.
The IBM implementation from Java 5 onward allocates slabs of native memory for each classloader to store class data in. The shared-classes technology in Java 5 and above maps an area of shared memory into the address space where read-only (and therefore shareable) class data is stored. This reduces the amount of physical memory required to store class data when multiple JVMs run on the same machine. Shared classes also improves JVM start-up time.
The shared-classes system maps a fixed-size area of shared memory into the address space. The shared class cache might not be completely occupied or might contain classes that you are not currently using (that have been loaded by other JVMs), so it's quite likely that using shared classes will occupy more address space (although less physical memory) than running without shared classes. It's important to note that shared classes doesn't prevent classloaders being unloaded — but it does cause a subset of the class data to remain in the class cache. See Resources for more information about shared classes.
Loading more classes uses more native memory. Each classloader also has a native-memory overhead — so having many classloaders each loading one class uses more native memory than having one classloader that loads many classes. Remember that it's not only your application classes that need to fit in memory; frameworks, application servers, third-party libraries, and Java runtimes contain classes that are loaded on demand and occupy space.
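The standard management API can show how many classes the running VM currently holds, which makes the on-demand loading described above easy to see:

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

public class ClassCounts {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        // Even a trivial program loads hundreds of runtime classes before main() runs.
        System.out.println("loaded:   " + cl.getLoadedClassCount());
        System.out.println("unloaded: " + cl.getUnloadedClassCount());
    }
}
```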
The Java runtime can unload classes to reclaim space, but only under strict conditions. It's impossible to unload a single class; classloaders are unloaded instead, taking all the classes they loaded with them. A classloader can be unloaded only if:
The Java heap contains no references to the object that represents that classloader.
The Java heap contains no references to any of the objects that represent classes loaded by that classloader.
No objects of any class loaded by that classloader are alive (referenced) on the Java heap.
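These criteria can be illustrated with a WeakReference: once no strong references to a loader (or to anything it loaded) remain, the loader becomes eligible for collection. A sketch under that assumption; note that collection itself is only a hint to the VM, never guaranteed:

```java
import java.lang.ref.WeakReference;
import java.net.URL;
import java.net.URLClassLoader;

public class LoaderCollect {
    public static void main(String[] args) {
        URLClassLoader loader = new URLClassLoader(new URL[0]);
        WeakReference<ClassLoader> ref = new WeakReference<>(loader);
        System.out.println(ref.get() != null); // true: a strong reference still exists
        loader = null;   // drop the only strong reference; the loader is now collectable
        System.gc();     // a request only; the VM may or may not collect it here
        System.out.println(ref.get() == null ? "collected" : "still reachable");
    }
}
```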
It's worth noting that the three default classloaders that the Java runtime creates for all Java applications — bootstrap, extension, and application — can never meet these criteria; therefore, any system classes, or any application classes loaded through the application classloader, can't be released.
Even when a classloader is eligible for collection, the runtime only collects classloaders as part of a GC cycle. The IBM gencon GC policy (enabled with the -Xgcpolicy:gencon command-line argument) unloads classloaders only on major (tenured) collections. If an application is running the gencon policy and creating and releasing many classloaders, you can find that large amounts of native memory are held by collectable classloaders in the period between tenured collections. See Resources to find out more about the different IBM GC policies.
It's also possible for classes to be generated at run time, without you necessarily realising it. Many JEE applications use JavaServer Pages (JSP) technology to produce Web pages. Using JSP generates a class for each .jsp page executed; these classes last the lifetime of the classloader that loaded them — typically the lifetime of the Web application.
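A standard, easy-to-demonstrate case of run-time class generation is java.lang.reflect.Proxy: the proxy class below does not exist until newProxyInstance manufactures it, and it occupies space until its defining classloader can be collected.

```java
import java.lang.reflect.Proxy;

public class GeneratedClass {
    public static void main(String[] args) {
        // The Runnable implementation here is a class built at run time.
        Runnable r = (Runnable) Proxy.newProxyInstance(
                GeneratedClass.class.getClassLoader(),
                new Class<?>[] { Runnable.class },
                (proxy, method, methodArgs) -> null);
        System.out.println(Proxy.isProxyClass(r.getClass())); // prints: true
        r.run();
    }
}
```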
Another common way to generate classes is by using Java reflection. When using the reflection API, the Java runtime must connect the methods of a reflecting object to the object or class being reflected on. This "accessor" can use the Java Native Interface (JNI), which requires very little setup but is slow to run, or it can build a class dynamically at run time for each object type you want to reflect on. The latter
method is slower to set up but faster to run, making it ideal for applications that reflect on a particular class often.
The Java runtime uses the JNI method the first few times a class is reflected on, but after being used a number of times, the accessor is inflated into a bytecode accessor, which involves building a class and loading it through a new classloader. Doing lots of reflection can cause many accessor classes and classloaders to be created. Holding references to the reflecting objects causes these classes to stay alive and continue occupying space. Because creating the bytecode accessors is quite slow, the Java runtime can cache these accessors for later use. Some applications and frameworks also cache reflection objects, thereby increasing their native footprint.
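A minimal illustration of the reflective path being described; the inflation itself happens inside the runtime and is not directly observable through standard APIs, so this simply exercises a reflective call enough times to cross a typical threshold:

```java
import java.lang.reflect.Method;

public class ReflectLoop {
    public static String greet() { return "hi"; }

    public static void main(String[] args) throws Exception {
        Method m = ReflectLoop.class.getMethod("greet");
        // Early calls go through a JNI accessor; after enough calls the runtime
        // may replace it with a generated bytecode accessor (threshold is VM-specific).
        for (int i = 0; i < 20; i++) {
            System.out.println(m.invoke(null)); // prints "hi" each iteration
        }
    }
}
```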
You can control the reflection accessor behaviour using system properties. The default inflation threshold (the number of times a JNI accessor is used before being inflated into a bytecode accessor) for the IBM Developer Kit for Java 5.0 is 15. You can modify this value by setting a system property on the Java command line. If you set the inflation threshold to 0 or less, the accessors will never be inflated. This can be useful if you find that your application is creating many of the classloaders used to load the bytecode accessors.
Another (much misunderstood) setting also affects the reflection accessors: one that disables inflation entirely but, counterintuitively, causes bytecode accessors to be used for everything. Using it increases the amount of address space consumed by reflection classloaders, because many more of them are created.
You can measure how much memory is being used for classes and JIT code at Java 5 and above by taking a javacore dump. A javacore is a plain-text file containing a summary of the Java runtime's internal state when the dump was taken — including information about allocated native memory segments. Newer versions of the IBM Developer Kit for Java 5 and 6 summarise the memory use in the javacore; for older versions (prior to Java 5 SR10 and Java 6 SR3), this article's sample code package includes a Perl script you can use to collate and present the data (see Downloads). To run it you need the Perl interpreter, which is available for AIX and other platforms.